Test Report: KVM_Linux_crio 19409

edd4f56319c0ca210375a4ae17d17ce22fec0e34:2024-08-12:35748

Failed tests (29/326)

Order  Failed test  Duration (s)
43 TestAddons/parallel/Ingress 153.17
45 TestAddons/parallel/MetricsServer 334.68
54 TestAddons/StoppedEnableDisable 154.29
173 TestMultiControlPlane/serial/StopSecondaryNode 141.99
175 TestMultiControlPlane/serial/RestartSecondaryNode 58.67
177 TestMultiControlPlane/serial/RestartClusterKeepsNodes 374.85
180 TestMultiControlPlane/serial/StopCluster 141.88
240 TestMultiNode/serial/RestartKeepsNodes 331.87
242 TestMultiNode/serial/StopMultiNode 141.45
249 TestPreload 269.58
257 TestKubernetesUpgrade 726.23
293 TestStartStop/group/old-k8s-version/serial/FirstStart 290.72
307 TestStartStop/group/embed-certs/serial/Stop 138.91
310 TestStartStop/group/no-preload/serial/Stop 139.11
311 TestStartStop/group/old-k8s-version/serial/DeployApp 0.51
312 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 104.09
313 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
317 TestStartStop/group/old-k8s-version/serial/SecondStart 740.58
320 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
324 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.11
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
327 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.55
328 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 542.56
329 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 541.86
330 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 542.98
331 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 385.95
332 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 358.92
333 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 118.63
356 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 167.08
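
The rows above are whitespace-delimited: execution order, test name, and duration in seconds. For anyone post-processing these reports, here is a minimal Go sketch (a hypothetical helper, not part of minikube or gopogh) that parses rows in exactly this layout from standard input:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// failure mirrors one row of the table above: order, test name, duration in seconds.
type failure struct {
	Order   int
	Name    string
	Seconds float64
}

func main() {
	var failures []failure
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) != 3 {
			continue // skip headers, blank lines, and anything that is not a data row
		}
		order, errOrder := strconv.Atoi(fields[0])
		secs, errSecs := strconv.ParseFloat(fields[2], 64)
		if errOrder != nil || errSecs != nil {
			continue
		}
		failures = append(failures, failure{Order: order, Name: fields[1], Seconds: secs})
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("parsed %d failed tests\n", len(failures))
}

Piping the table rows into this program should report 29 parsed failures for this run.
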
TestAddons/parallel/Ingress (153.17s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-883541 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-883541 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-883541 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ad4b39e3-5426-4eb3-96c3-66ba2085da60] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ad4b39e3-5426-4eb3-96c3-66ba2085da60] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004513006s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-883541 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-883541 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.42725871s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-883541 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-883541 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.215
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-883541 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-883541 addons disable ingress-dns --alsologtostderr -v=1: (1.014405001s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-883541 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-883541 addons disable ingress --alsologtostderr -v=1: (7.765457657s)
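
The failing step is the reachability check: the curl run inside the VM via "minikube ssh" exited with status 28, which is curl's code for an operation timeout, i.e. the ingress never answered within the deadline. As an out-of-band way to re-run an equivalent check from the host against the VM IP recorded in this run (192.168.39.215), here is a minimal Go sketch (a hypothetical approximation, not the test's own code; it probes the VM IP from the host rather than 127.0.0.1 inside the guest, so results can differ):

package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	// VM IP reported by "minikube -p addons-883541 ip" in this run.
	const vmIP = "192.168.39.215"

	req, err := http.NewRequest(http.MethodGet, "http://"+vmIP+"/", nil)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// ingress-nginx routes by Host header, so use the hostname the test expects.
	req.Host = "nginx.example.com"

	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Do(req)
	if err != nil {
		fmt.Fprintln(os.Stderr, "ingress not reachable:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println("ingress responded with HTTP status", resp.StatusCode)
}

A healthy deployment would answer with an HTTP 200 served by the nginx pod behind the ingress; a timeout here mirrors the failure recorded above.
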
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-883541 -n addons-883541
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-883541 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-883541 logs -n 25: (1.185584243s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-652906                                                                     | download-only-652906 | jenkins | v1.33.1 | 12 Aug 24 10:21 UTC | 12 Aug 24 10:21 UTC |
	| delete  | -p download-only-850332                                                                     | download-only-850332 | jenkins | v1.33.1 | 12 Aug 24 10:21 UTC | 12 Aug 24 10:21 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-087798 | jenkins | v1.33.1 | 12 Aug 24 10:21 UTC |                     |
	|         | binary-mirror-087798                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:38789                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-087798                                                                     | binary-mirror-087798 | jenkins | v1.33.1 | 12 Aug 24 10:21 UTC | 12 Aug 24 10:21 UTC |
	| addons  | disable dashboard -p                                                                        | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:21 UTC |                     |
	|         | addons-883541                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:21 UTC |                     |
	|         | addons-883541                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-883541 --wait=true                                                                | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:21 UTC | 12 Aug 24 10:23 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-883541 addons disable                                                                | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:23 UTC | 12 Aug 24 10:23 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-883541 addons disable                                                                | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:23 UTC | 12 Aug 24 10:24 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-883541 ssh cat                                                                       | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:24 UTC | 12 Aug 24 10:24 UTC |
	|         | /opt/local-path-provisioner/pvc-1f7cbad0-48c1-4940-b719-ed56d7f5b5f3_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-883541 addons disable                                                                | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:24 UTC | 12 Aug 24 10:24 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-883541 ip                                                                            | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:24 UTC | 12 Aug 24 10:24 UTC |
	| addons  | addons-883541 addons disable                                                                | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:24 UTC | 12 Aug 24 10:24 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:24 UTC | 12 Aug 24 10:24 UTC |
	|         | -p addons-883541                                                                            |                      |         |         |                     |                     |
	| addons  | addons-883541 addons disable                                                                | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:24 UTC | 12 Aug 24 10:24 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:24 UTC | 12 Aug 24 10:24 UTC |
	|         | addons-883541                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:24 UTC | 12 Aug 24 10:24 UTC |
	|         | -p addons-883541                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-883541 addons disable                                                                | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:24 UTC | 12 Aug 24 10:24 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:24 UTC | 12 Aug 24 10:24 UTC |
	|         | addons-883541                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-883541 ssh curl -s                                                                   | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:24 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-883541 addons                                                                        | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:25 UTC | 12 Aug 24 10:25 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-883541 addons                                                                        | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:25 UTC | 12 Aug 24 10:25 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-883541 ip                                                                            | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:27 UTC | 12 Aug 24 10:27 UTC |
	| addons  | addons-883541 addons disable                                                                | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:27 UTC | 12 Aug 24 10:27 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-883541 addons disable                                                                | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:27 UTC | 12 Aug 24 10:27 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 10:21:08
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 10:21:08.010162   11941 out.go:291] Setting OutFile to fd 1 ...
	I0812 10:21:08.010413   11941 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:21:08.010423   11941 out.go:304] Setting ErrFile to fd 2...
	I0812 10:21:08.010429   11941 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:21:08.010649   11941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 10:21:08.011274   11941 out.go:298] Setting JSON to false
	I0812 10:21:08.012117   11941 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":209,"bootTime":1723457859,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 10:21:08.012179   11941 start.go:139] virtualization: kvm guest
	I0812 10:21:08.014249   11941 out.go:177] * [addons-883541] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 10:21:08.015719   11941 notify.go:220] Checking for updates...
	I0812 10:21:08.015736   11941 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 10:21:08.017075   11941 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 10:21:08.018615   11941 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 10:21:08.020026   11941 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 10:21:08.021255   11941 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 10:21:08.022824   11941 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 10:21:08.024404   11941 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 10:21:08.057326   11941 out.go:177] * Using the kvm2 driver based on user configuration
	I0812 10:21:08.058616   11941 start.go:297] selected driver: kvm2
	I0812 10:21:08.058630   11941 start.go:901] validating driver "kvm2" against <nil>
	I0812 10:21:08.058644   11941 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 10:21:08.059335   11941 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 10:21:08.059425   11941 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19409-3774/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 10:21:08.074950   11941 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 10:21:08.075013   11941 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 10:21:08.075258   11941 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 10:21:08.075288   11941 cni.go:84] Creating CNI manager for ""
	I0812 10:21:08.075298   11941 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 10:21:08.075309   11941 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0812 10:21:08.075388   11941 start.go:340] cluster config:
	{Name:addons-883541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-883541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 10:21:08.075506   11941 iso.go:125] acquiring lock: {Name:mk817f4bb6a5031e68978029203802957205757f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 10:21:08.077605   11941 out.go:177] * Starting "addons-883541" primary control-plane node in "addons-883541" cluster
	I0812 10:21:08.079120   11941 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 10:21:08.079168   11941 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0812 10:21:08.079181   11941 cache.go:56] Caching tarball of preloaded images
	I0812 10:21:08.079273   11941 preload.go:172] Found /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 10:21:08.079285   11941 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 10:21:08.079596   11941 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/config.json ...
	I0812 10:21:08.079622   11941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/config.json: {Name:mkb5800adfa9cd219cce82c1061d5731703702f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:21:08.079781   11941 start.go:360] acquireMachinesLock for addons-883541: {Name:mkf1f3ff7da562a087a1e344d13e67c3c8140973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 10:21:08.079838   11941 start.go:364] duration metric: took 42.414µs to acquireMachinesLock for "addons-883541"
	I0812 10:21:08.079863   11941 start.go:93] Provisioning new machine with config: &{Name:addons-883541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:addons-883541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 10:21:08.079935   11941 start.go:125] createHost starting for "" (driver="kvm2")
	I0812 10:21:08.081850   11941 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0812 10:21:08.082017   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:21:08.082068   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:21:08.096756   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44609
	I0812 10:21:08.097267   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:21:08.097886   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:21:08.097916   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:21:08.098243   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:21:08.098451   11941 main.go:141] libmachine: (addons-883541) Calling .GetMachineName
	I0812 10:21:08.098620   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:21:08.098767   11941 start.go:159] libmachine.API.Create for "addons-883541" (driver="kvm2")
	I0812 10:21:08.098797   11941 client.go:168] LocalClient.Create starting
	I0812 10:21:08.098836   11941 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem
	I0812 10:21:08.180288   11941 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem
	I0812 10:21:08.383408   11941 main.go:141] libmachine: Running pre-create checks...
	I0812 10:21:08.383432   11941 main.go:141] libmachine: (addons-883541) Calling .PreCreateCheck
	I0812 10:21:08.383947   11941 main.go:141] libmachine: (addons-883541) Calling .GetConfigRaw
	I0812 10:21:08.384420   11941 main.go:141] libmachine: Creating machine...
	I0812 10:21:08.384434   11941 main.go:141] libmachine: (addons-883541) Calling .Create
	I0812 10:21:08.384605   11941 main.go:141] libmachine: (addons-883541) Creating KVM machine...
	I0812 10:21:08.385902   11941 main.go:141] libmachine: (addons-883541) DBG | found existing default KVM network
	I0812 10:21:08.386616   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:08.386431   11963 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015330}
	I0812 10:21:08.386638   11941 main.go:141] libmachine: (addons-883541) DBG | created network xml: 
	I0812 10:21:08.386648   11941 main.go:141] libmachine: (addons-883541) DBG | <network>
	I0812 10:21:08.386653   11941 main.go:141] libmachine: (addons-883541) DBG |   <name>mk-addons-883541</name>
	I0812 10:21:08.386659   11941 main.go:141] libmachine: (addons-883541) DBG |   <dns enable='no'/>
	I0812 10:21:08.386666   11941 main.go:141] libmachine: (addons-883541) DBG |   
	I0812 10:21:08.386700   11941 main.go:141] libmachine: (addons-883541) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0812 10:21:08.386713   11941 main.go:141] libmachine: (addons-883541) DBG |     <dhcp>
	I0812 10:21:08.386723   11941 main.go:141] libmachine: (addons-883541) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0812 10:21:08.386731   11941 main.go:141] libmachine: (addons-883541) DBG |     </dhcp>
	I0812 10:21:08.386764   11941 main.go:141] libmachine: (addons-883541) DBG |   </ip>
	I0812 10:21:08.386786   11941 main.go:141] libmachine: (addons-883541) DBG |   
	I0812 10:21:08.386796   11941 main.go:141] libmachine: (addons-883541) DBG | </network>
	I0812 10:21:08.386804   11941 main.go:141] libmachine: (addons-883541) DBG | 
	I0812 10:21:08.392398   11941 main.go:141] libmachine: (addons-883541) DBG | trying to create private KVM network mk-addons-883541 192.168.39.0/24...
	I0812 10:21:08.460813   11941 main.go:141] libmachine: (addons-883541) DBG | private KVM network mk-addons-883541 192.168.39.0/24 created
	I0812 10:21:08.460853   11941 main.go:141] libmachine: (addons-883541) Setting up store path in /home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541 ...
	I0812 10:21:08.460882   11941 main.go:141] libmachine: (addons-883541) Building disk image from file:///home/jenkins/minikube-integration/19409-3774/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0812 10:21:08.460900   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:08.460784   11963 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 10:21:08.460984   11941 main.go:141] libmachine: (addons-883541) Downloading /home/jenkins/minikube-integration/19409-3774/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19409-3774/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0812 10:21:08.743896   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:08.743778   11963 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa...
	I0812 10:21:09.002621   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:09.002470   11963 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/addons-883541.rawdisk...
	I0812 10:21:09.002642   11941 main.go:141] libmachine: (addons-883541) DBG | Writing magic tar header
	I0812 10:21:09.002679   11941 main.go:141] libmachine: (addons-883541) DBG | Writing SSH key tar header
	I0812 10:21:09.002687   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:09.002585   11963 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541 ...
	I0812 10:21:09.002698   11941 main.go:141] libmachine: (addons-883541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541
	I0812 10:21:09.002712   11941 main.go:141] libmachine: (addons-883541) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541 (perms=drwx------)
	I0812 10:21:09.002730   11941 main.go:141] libmachine: (addons-883541) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube/machines (perms=drwxr-xr-x)
	I0812 10:21:09.002772   11941 main.go:141] libmachine: (addons-883541) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube (perms=drwxr-xr-x)
	I0812 10:21:09.002799   11941 main.go:141] libmachine: (addons-883541) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774 (perms=drwxrwxr-x)
	I0812 10:21:09.002822   11941 main.go:141] libmachine: (addons-883541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube/machines
	I0812 10:21:09.002841   11941 main.go:141] libmachine: (addons-883541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 10:21:09.002853   11941 main.go:141] libmachine: (addons-883541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774
	I0812 10:21:09.002870   11941 main.go:141] libmachine: (addons-883541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0812 10:21:09.002878   11941 main.go:141] libmachine: (addons-883541) DBG | Checking permissions on dir: /home/jenkins
	I0812 10:21:09.002885   11941 main.go:141] libmachine: (addons-883541) DBG | Checking permissions on dir: /home
	I0812 10:21:09.002898   11941 main.go:141] libmachine: (addons-883541) DBG | Skipping /home - not owner
	I0812 10:21:09.002932   11941 main.go:141] libmachine: (addons-883541) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0812 10:21:09.002952   11941 main.go:141] libmachine: (addons-883541) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0812 10:21:09.002961   11941 main.go:141] libmachine: (addons-883541) Creating domain...
	I0812 10:21:09.003924   11941 main.go:141] libmachine: (addons-883541) define libvirt domain using xml: 
	I0812 10:21:09.003948   11941 main.go:141] libmachine: (addons-883541) <domain type='kvm'>
	I0812 10:21:09.003967   11941 main.go:141] libmachine: (addons-883541)   <name>addons-883541</name>
	I0812 10:21:09.003980   11941 main.go:141] libmachine: (addons-883541)   <memory unit='MiB'>4000</memory>
	I0812 10:21:09.004006   11941 main.go:141] libmachine: (addons-883541)   <vcpu>2</vcpu>
	I0812 10:21:09.004026   11941 main.go:141] libmachine: (addons-883541)   <features>
	I0812 10:21:09.004039   11941 main.go:141] libmachine: (addons-883541)     <acpi/>
	I0812 10:21:09.004048   11941 main.go:141] libmachine: (addons-883541)     <apic/>
	I0812 10:21:09.004056   11941 main.go:141] libmachine: (addons-883541)     <pae/>
	I0812 10:21:09.004063   11941 main.go:141] libmachine: (addons-883541)     
	I0812 10:21:09.004069   11941 main.go:141] libmachine: (addons-883541)   </features>
	I0812 10:21:09.004076   11941 main.go:141] libmachine: (addons-883541)   <cpu mode='host-passthrough'>
	I0812 10:21:09.004081   11941 main.go:141] libmachine: (addons-883541)   
	I0812 10:21:09.004093   11941 main.go:141] libmachine: (addons-883541)   </cpu>
	I0812 10:21:09.004110   11941 main.go:141] libmachine: (addons-883541)   <os>
	I0812 10:21:09.004128   11941 main.go:141] libmachine: (addons-883541)     <type>hvm</type>
	I0812 10:21:09.004138   11941 main.go:141] libmachine: (addons-883541)     <boot dev='cdrom'/>
	I0812 10:21:09.004148   11941 main.go:141] libmachine: (addons-883541)     <boot dev='hd'/>
	I0812 10:21:09.004158   11941 main.go:141] libmachine: (addons-883541)     <bootmenu enable='no'/>
	I0812 10:21:09.004167   11941 main.go:141] libmachine: (addons-883541)   </os>
	I0812 10:21:09.004178   11941 main.go:141] libmachine: (addons-883541)   <devices>
	I0812 10:21:09.004188   11941 main.go:141] libmachine: (addons-883541)     <disk type='file' device='cdrom'>
	I0812 10:21:09.004208   11941 main.go:141] libmachine: (addons-883541)       <source file='/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/boot2docker.iso'/>
	I0812 10:21:09.004225   11941 main.go:141] libmachine: (addons-883541)       <target dev='hdc' bus='scsi'/>
	I0812 10:21:09.004238   11941 main.go:141] libmachine: (addons-883541)       <readonly/>
	I0812 10:21:09.004247   11941 main.go:141] libmachine: (addons-883541)     </disk>
	I0812 10:21:09.004253   11941 main.go:141] libmachine: (addons-883541)     <disk type='file' device='disk'>
	I0812 10:21:09.004265   11941 main.go:141] libmachine: (addons-883541)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0812 10:21:09.004276   11941 main.go:141] libmachine: (addons-883541)       <source file='/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/addons-883541.rawdisk'/>
	I0812 10:21:09.004283   11941 main.go:141] libmachine: (addons-883541)       <target dev='hda' bus='virtio'/>
	I0812 10:21:09.004288   11941 main.go:141] libmachine: (addons-883541)     </disk>
	I0812 10:21:09.004294   11941 main.go:141] libmachine: (addons-883541)     <interface type='network'>
	I0812 10:21:09.004301   11941 main.go:141] libmachine: (addons-883541)       <source network='mk-addons-883541'/>
	I0812 10:21:09.004307   11941 main.go:141] libmachine: (addons-883541)       <model type='virtio'/>
	I0812 10:21:09.004313   11941 main.go:141] libmachine: (addons-883541)     </interface>
	I0812 10:21:09.004319   11941 main.go:141] libmachine: (addons-883541)     <interface type='network'>
	I0812 10:21:09.004325   11941 main.go:141] libmachine: (addons-883541)       <source network='default'/>
	I0812 10:21:09.004332   11941 main.go:141] libmachine: (addons-883541)       <model type='virtio'/>
	I0812 10:21:09.004337   11941 main.go:141] libmachine: (addons-883541)     </interface>
	I0812 10:21:09.004344   11941 main.go:141] libmachine: (addons-883541)     <serial type='pty'>
	I0812 10:21:09.004349   11941 main.go:141] libmachine: (addons-883541)       <target port='0'/>
	I0812 10:21:09.004363   11941 main.go:141] libmachine: (addons-883541)     </serial>
	I0812 10:21:09.004370   11941 main.go:141] libmachine: (addons-883541)     <console type='pty'>
	I0812 10:21:09.004380   11941 main.go:141] libmachine: (addons-883541)       <target type='serial' port='0'/>
	I0812 10:21:09.004395   11941 main.go:141] libmachine: (addons-883541)     </console>
	I0812 10:21:09.004412   11941 main.go:141] libmachine: (addons-883541)     <rng model='virtio'>
	I0812 10:21:09.004427   11941 main.go:141] libmachine: (addons-883541)       <backend model='random'>/dev/random</backend>
	I0812 10:21:09.004436   11941 main.go:141] libmachine: (addons-883541)     </rng>
	I0812 10:21:09.004447   11941 main.go:141] libmachine: (addons-883541)     
	I0812 10:21:09.004456   11941 main.go:141] libmachine: (addons-883541)     
	I0812 10:21:09.004468   11941 main.go:141] libmachine: (addons-883541)   </devices>
	I0812 10:21:09.004477   11941 main.go:141] libmachine: (addons-883541) </domain>
	I0812 10:21:09.004487   11941 main.go:141] libmachine: (addons-883541) 
	I0812 10:21:09.010499   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:50:75:f2 in network default
	I0812 10:21:09.011103   11941 main.go:141] libmachine: (addons-883541) Ensuring networks are active...
	I0812 10:21:09.011129   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:09.011764   11941 main.go:141] libmachine: (addons-883541) Ensuring network default is active
	I0812 10:21:09.012067   11941 main.go:141] libmachine: (addons-883541) Ensuring network mk-addons-883541 is active
	I0812 10:21:09.012516   11941 main.go:141] libmachine: (addons-883541) Getting domain xml...
	I0812 10:21:09.013134   11941 main.go:141] libmachine: (addons-883541) Creating domain...
	I0812 10:21:10.424149   11941 main.go:141] libmachine: (addons-883541) Waiting to get IP...
	I0812 10:21:10.424797   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:10.425212   11941 main.go:141] libmachine: (addons-883541) DBG | unable to find current IP address of domain addons-883541 in network mk-addons-883541
	I0812 10:21:10.425285   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:10.425161   11963 retry.go:31] will retry after 205.860955ms: waiting for machine to come up
	I0812 10:21:10.632616   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:10.633142   11941 main.go:141] libmachine: (addons-883541) DBG | unable to find current IP address of domain addons-883541 in network mk-addons-883541
	I0812 10:21:10.633168   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:10.633088   11963 retry.go:31] will retry after 339.919384ms: waiting for machine to come up
	I0812 10:21:10.974737   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:10.975182   11941 main.go:141] libmachine: (addons-883541) DBG | unable to find current IP address of domain addons-883541 in network mk-addons-883541
	I0812 10:21:10.975213   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:10.975124   11963 retry.go:31] will retry after 380.644279ms: waiting for machine to come up
	I0812 10:21:11.357601   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:11.357921   11941 main.go:141] libmachine: (addons-883541) DBG | unable to find current IP address of domain addons-883541 in network mk-addons-883541
	I0812 10:21:11.357947   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:11.357868   11963 retry.go:31] will retry after 544.700698ms: waiting for machine to come up
	I0812 10:21:11.904505   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:11.904933   11941 main.go:141] libmachine: (addons-883541) DBG | unable to find current IP address of domain addons-883541 in network mk-addons-883541
	I0812 10:21:11.904962   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:11.904899   11963 retry.go:31] will retry after 662.908472ms: waiting for machine to come up
	I0812 10:21:12.569947   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:12.570484   11941 main.go:141] libmachine: (addons-883541) DBG | unable to find current IP address of domain addons-883541 in network mk-addons-883541
	I0812 10:21:12.570523   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:12.570408   11963 retry.go:31] will retry after 790.630659ms: waiting for machine to come up
	I0812 10:21:13.363042   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:13.363514   11941 main.go:141] libmachine: (addons-883541) DBG | unable to find current IP address of domain addons-883541 in network mk-addons-883541
	I0812 10:21:13.363539   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:13.363476   11963 retry.go:31] will retry after 901.462035ms: waiting for machine to come up
	I0812 10:21:14.267066   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:14.267503   11941 main.go:141] libmachine: (addons-883541) DBG | unable to find current IP address of domain addons-883541 in network mk-addons-883541
	I0812 10:21:14.267533   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:14.267465   11963 retry.go:31] will retry after 1.021341432s: waiting for machine to come up
	I0812 10:21:15.290676   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:15.291073   11941 main.go:141] libmachine: (addons-883541) DBG | unable to find current IP address of domain addons-883541 in network mk-addons-883541
	I0812 10:21:15.291096   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:15.291030   11963 retry.go:31] will retry after 1.713051639s: waiting for machine to come up
	I0812 10:21:17.006538   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:17.006931   11941 main.go:141] libmachine: (addons-883541) DBG | unable to find current IP address of domain addons-883541 in network mk-addons-883541
	I0812 10:21:17.006960   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:17.006881   11963 retry.go:31] will retry after 1.554642738s: waiting for machine to come up
	I0812 10:21:18.563773   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:18.564315   11941 main.go:141] libmachine: (addons-883541) DBG | unable to find current IP address of domain addons-883541 in network mk-addons-883541
	I0812 10:21:18.564343   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:18.564269   11963 retry.go:31] will retry after 1.7660377s: waiting for machine to come up
	I0812 10:21:20.331974   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:20.332362   11941 main.go:141] libmachine: (addons-883541) DBG | unable to find current IP address of domain addons-883541 in network mk-addons-883541
	I0812 10:21:20.332385   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:20.332320   11963 retry.go:31] will retry after 2.252678642s: waiting for machine to come up
	I0812 10:21:22.587099   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:22.587579   11941 main.go:141] libmachine: (addons-883541) DBG | unable to find current IP address of domain addons-883541 in network mk-addons-883541
	I0812 10:21:22.587603   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:22.587553   11963 retry.go:31] will retry after 3.950816065s: waiting for machine to come up
	I0812 10:21:26.542025   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:26.542518   11941 main.go:141] libmachine: (addons-883541) DBG | unable to find current IP address of domain addons-883541 in network mk-addons-883541
	I0812 10:21:26.542552   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:26.542434   11963 retry.go:31] will retry after 3.939180324s: waiting for machine to come up
	I0812 10:21:30.484567   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:30.485187   11941 main.go:141] libmachine: (addons-883541) Found IP for machine: 192.168.39.215
	I0812 10:21:30.485204   11941 main.go:141] libmachine: (addons-883541) Reserving static IP address...
	I0812 10:21:30.485232   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has current primary IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:30.485687   11941 main.go:141] libmachine: (addons-883541) DBG | unable to find host DHCP lease matching {name: "addons-883541", mac: "52:54:00:63:c3:eb", ip: "192.168.39.215"} in network mk-addons-883541
	I0812 10:21:30.582378   11941 main.go:141] libmachine: (addons-883541) Reserved static IP address: 192.168.39.215
	I0812 10:21:30.582408   11941 main.go:141] libmachine: (addons-883541) Waiting for SSH to be available...
	I0812 10:21:30.582417   11941 main.go:141] libmachine: (addons-883541) DBG | Getting to WaitForSSH function...
	I0812 10:21:30.585422   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:30.585953   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:minikube Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:30.585987   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:30.586233   11941 main.go:141] libmachine: (addons-883541) DBG | Using SSH client type: external
	I0812 10:21:30.586264   11941 main.go:141] libmachine: (addons-883541) DBG | Using SSH private key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa (-rw-------)
	I0812 10:21:30.586342   11941 main.go:141] libmachine: (addons-883541) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.215 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0812 10:21:30.586363   11941 main.go:141] libmachine: (addons-883541) DBG | About to run SSH command:
	I0812 10:21:30.586383   11941 main.go:141] libmachine: (addons-883541) DBG | exit 0
	I0812 10:21:30.716970   11941 main.go:141] libmachine: (addons-883541) DBG | SSH cmd err, output: <nil>: 
	I0812 10:21:30.717266   11941 main.go:141] libmachine: (addons-883541) KVM machine creation complete!
	I0812 10:21:30.717681   11941 main.go:141] libmachine: (addons-883541) Calling .GetConfigRaw
	I0812 10:21:30.718229   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:21:30.718428   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:21:30.718640   11941 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0812 10:21:30.718657   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:21:30.720022   11941 main.go:141] libmachine: Detecting operating system of created instance...
	I0812 10:21:30.720038   11941 main.go:141] libmachine: Waiting for SSH to be available...
	I0812 10:21:30.720045   11941 main.go:141] libmachine: Getting to WaitForSSH function...
	I0812 10:21:30.720053   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:21:30.722434   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:30.722825   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:30.722851   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:30.723008   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:21:30.723192   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:21:30.723354   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:21:30.723490   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:21:30.723650   11941 main.go:141] libmachine: Using SSH client type: native
	I0812 10:21:30.723830   11941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0812 10:21:30.723840   11941 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0812 10:21:30.820163   11941 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 10:21:30.820183   11941 main.go:141] libmachine: Detecting the provisioner...
	I0812 10:21:30.820190   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:21:30.823026   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:30.823375   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:30.823401   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:30.823618   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:21:30.823863   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:21:30.824049   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:21:30.824232   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:21:30.824420   11941 main.go:141] libmachine: Using SSH client type: native
	I0812 10:21:30.824657   11941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0812 10:21:30.824674   11941 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0812 10:21:30.921757   11941 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0812 10:21:30.921839   11941 main.go:141] libmachine: found compatible host: buildroot
	I0812 10:21:30.921852   11941 main.go:141] libmachine: Provisioning with buildroot...
	I0812 10:21:30.921862   11941 main.go:141] libmachine: (addons-883541) Calling .GetMachineName
	I0812 10:21:30.922116   11941 buildroot.go:166] provisioning hostname "addons-883541"
	I0812 10:21:30.922147   11941 main.go:141] libmachine: (addons-883541) Calling .GetMachineName
	I0812 10:21:30.922329   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:21:30.925105   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:30.925630   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:30.925663   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:30.925876   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:21:30.926107   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:21:30.926367   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:21:30.926536   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:21:30.926766   11941 main.go:141] libmachine: Using SSH client type: native
	I0812 10:21:30.926931   11941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0812 10:21:30.926944   11941 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-883541 && echo "addons-883541" | sudo tee /etc/hostname
	I0812 10:21:31.039214   11941 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-883541
	
	I0812 10:21:31.039241   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:21:31.042261   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.042624   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:31.042654   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.042923   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:21:31.043155   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:21:31.043317   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:21:31.043485   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:21:31.043638   11941 main.go:141] libmachine: Using SSH client type: native
	I0812 10:21:31.043803   11941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0812 10:21:31.043818   11941 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-883541' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-883541/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-883541' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 10:21:31.149590   11941 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 10:21:31.149628   11941 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19409-3774/.minikube CaCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19409-3774/.minikube}
	I0812 10:21:31.149678   11941 buildroot.go:174] setting up certificates
	I0812 10:21:31.149695   11941 provision.go:84] configureAuth start
	I0812 10:21:31.149707   11941 main.go:141] libmachine: (addons-883541) Calling .GetMachineName
	I0812 10:21:31.149976   11941 main.go:141] libmachine: (addons-883541) Calling .GetIP
	I0812 10:21:31.152745   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.153272   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:31.153295   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.153520   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:21:31.156081   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.156397   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:31.156423   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.156599   11941 provision.go:143] copyHostCerts
	I0812 10:21:31.156675   11941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem (1123 bytes)
	I0812 10:21:31.156809   11941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem (1679 bytes)
	I0812 10:21:31.156925   11941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem (1082 bytes)
	I0812 10:21:31.157001   11941 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem org=jenkins.addons-883541 san=[127.0.0.1 192.168.39.215 addons-883541 localhost minikube]
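For reference, a minimal sketch (standard openssl CLI; the path and SAN list are taken from the log line above) of how the SANs baked into the generated server certificate could be inspected:

	# Print the Subject Alternative Names of server.pem; per the log they should
	# cover 127.0.0.1, 192.168.39.215, addons-883541, localhost and minikube.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'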
	I0812 10:21:31.248717   11941 provision.go:177] copyRemoteCerts
	I0812 10:21:31.248773   11941 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 10:21:31.248795   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:21:31.251420   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.251797   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:31.251819   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.252023   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:21:31.252199   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:21:31.252414   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:21:31.252563   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:21:31.331113   11941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0812 10:21:31.355053   11941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0812 10:21:31.378351   11941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0812 10:21:31.401215   11941 provision.go:87] duration metric: took 251.504934ms to configureAuth
	I0812 10:21:31.401246   11941 buildroot.go:189] setting minikube options for container-runtime
	I0812 10:21:31.401453   11941 config.go:182] Loaded profile config "addons-883541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:21:31.401542   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:21:31.404516   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.404839   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:31.404885   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.405068   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:21:31.405299   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:21:31.405438   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:21:31.405579   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:21:31.405699   11941 main.go:141] libmachine: Using SSH client type: native
	I0812 10:21:31.405853   11941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0812 10:21:31.405868   11941 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 10:21:31.665706   11941 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 10:21:31.665728   11941 main.go:141] libmachine: Checking connection to Docker...
	I0812 10:21:31.665735   11941 main.go:141] libmachine: (addons-883541) Calling .GetURL
	I0812 10:21:31.667016   11941 main.go:141] libmachine: (addons-883541) DBG | Using libvirt version 6000000
	I0812 10:21:31.668924   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.669271   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:31.669298   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.669395   11941 main.go:141] libmachine: Docker is up and running!
	I0812 10:21:31.669411   11941 main.go:141] libmachine: Reticulating splines...
	I0812 10:21:31.669418   11941 client.go:171] duration metric: took 23.570613961s to LocalClient.Create
	I0812 10:21:31.669440   11941 start.go:167] duration metric: took 23.570674209s to libmachine.API.Create "addons-883541"
	I0812 10:21:31.669449   11941 start.go:293] postStartSetup for "addons-883541" (driver="kvm2")
	I0812 10:21:31.669458   11941 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 10:21:31.669474   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:21:31.669741   11941 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 10:21:31.669764   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:21:31.671960   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.672326   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:31.672359   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.672593   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:21:31.672809   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:21:31.672986   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:21:31.673127   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:21:31.751158   11941 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 10:21:31.755512   11941 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 10:21:31.755546   11941 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/addons for local assets ...
	I0812 10:21:31.755621   11941 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/files for local assets ...
	I0812 10:21:31.755647   11941 start.go:296] duration metric: took 86.193416ms for postStartSetup
	I0812 10:21:31.755680   11941 main.go:141] libmachine: (addons-883541) Calling .GetConfigRaw
	I0812 10:21:31.756321   11941 main.go:141] libmachine: (addons-883541) Calling .GetIP
	I0812 10:21:31.758891   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.759214   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:31.759232   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.759572   11941 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/config.json ...
	I0812 10:21:31.759819   11941 start.go:128] duration metric: took 23.679872598s to createHost
	I0812 10:21:31.759845   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:21:31.762441   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.762765   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:31.762794   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.762923   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:21:31.763161   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:21:31.763367   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:21:31.763543   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:21:31.763732   11941 main.go:141] libmachine: Using SSH client type: native
	I0812 10:21:31.763896   11941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0812 10:21:31.763905   11941 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0812 10:21:31.861590   11941 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723458091.838434418
	
	I0812 10:21:31.861621   11941 fix.go:216] guest clock: 1723458091.838434418
	I0812 10:21:31.861632   11941 fix.go:229] Guest: 2024-08-12 10:21:31.838434418 +0000 UTC Remote: 2024-08-12 10:21:31.75983237 +0000 UTC m=+23.782995760 (delta=78.602048ms)
	I0812 10:21:31.861673   11941 fix.go:200] guest clock delta is within tolerance: 78.602048ms
	I0812 10:21:31.861689   11941 start.go:83] releasing machines lock for "addons-883541", held for 23.78183708s
	I0812 10:21:31.861720   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:21:31.861989   11941 main.go:141] libmachine: (addons-883541) Calling .GetIP
	I0812 10:21:31.864913   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.865286   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:31.865316   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.865447   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:21:31.865896   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:21:31.866104   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:21:31.866242   11941 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 10:21:31.866279   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:21:31.866340   11941 ssh_runner.go:195] Run: cat /version.json
	I0812 10:21:31.866365   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:21:31.869201   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.869340   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.869554   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:31.869589   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.869689   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:21:31.869803   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:31.869825   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.869864   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:21:31.869979   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:21:31.870043   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:21:31.870112   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:21:31.870181   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:21:31.870223   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:21:31.870351   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:21:31.941556   11941 ssh_runner.go:195] Run: systemctl --version
	I0812 10:21:31.984093   11941 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 10:21:32.143967   11941 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 10:21:32.150030   11941 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 10:21:32.150098   11941 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 10:21:32.165232   11941 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0812 10:21:32.165259   11941 start.go:495] detecting cgroup driver to use...
	I0812 10:21:32.165333   11941 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 10:21:32.181149   11941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 10:21:32.195218   11941 docker.go:217] disabling cri-docker service (if available) ...
	I0812 10:21:32.195291   11941 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 10:21:32.209540   11941 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 10:21:32.223886   11941 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 10:21:32.336364   11941 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 10:21:32.477052   11941 docker.go:233] disabling docker service ...
	I0812 10:21:32.477125   11941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 10:21:32.490680   11941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 10:21:32.503560   11941 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 10:21:32.638938   11941 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 10:21:32.748297   11941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 10:21:32.762174   11941 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 10:21:32.779947   11941 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0812 10:21:32.780000   11941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:21:32.790168   11941 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 10:21:32.790225   11941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:21:32.800410   11941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:21:32.810497   11941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:21:32.820384   11941 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 10:21:32.830935   11941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:21:32.841148   11941 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:21:32.857581   11941 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
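A minimal sketch (assuming the sed edits above applied cleanly) of how the rewritten CRI-O drop-in could be spot-checked on the guest:

	# Keys rewritten by the provisioner; expected values per the commands above:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0" inside default_sysctls
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf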
	I0812 10:21:32.867677   11941 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 10:21:32.877793   11941 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0812 10:21:32.877858   11941 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0812 10:21:32.891675   11941 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
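A short sketch (standard Linux tooling, nothing minikube-specific) of how the netfilter and forwarding state set up above could be verified:

	# br_netfilter had to be loaded because /proc/sys/net/bridge/ was missing;
	# after modprobe the sysctl resolves, and ip_forward should now read 1.
	lsmod | grep br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables
	cat /proc/sys/net/ipv4/ip_forward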
	I0812 10:21:32.901886   11941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 10:21:33.012932   11941 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0812 10:21:33.147893   11941 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 10:21:33.147981   11941 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 10:21:33.152587   11941 start.go:563] Will wait 60s for crictl version
	I0812 10:21:33.152658   11941 ssh_runner.go:195] Run: which crictl
	I0812 10:21:33.156180   11941 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 10:21:33.191537   11941 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0812 10:21:33.191670   11941 ssh_runner.go:195] Run: crio --version
	I0812 10:21:33.218953   11941 ssh_runner.go:195] Run: crio --version
	I0812 10:21:33.246760   11941 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0812 10:21:33.248440   11941 main.go:141] libmachine: (addons-883541) Calling .GetIP
	I0812 10:21:33.251010   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:33.251400   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:33.251430   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:33.251688   11941 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0812 10:21:33.255824   11941 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 10:21:33.268324   11941 kubeadm.go:883] updating cluster {Name:addons-883541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-883541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0812 10:21:33.268424   11941 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 10:21:33.268464   11941 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 10:21:33.299877   11941 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0812 10:21:33.299939   11941 ssh_runner.go:195] Run: which lz4
	I0812 10:21:33.303751   11941 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0812 10:21:33.307521   11941 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0812 10:21:33.307554   11941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0812 10:21:34.570282   11941 crio.go:462] duration metric: took 1.266569953s to copy over tarball
	I0812 10:21:34.570348   11941 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0812 10:21:36.840842   11941 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.270465129s)
	I0812 10:21:36.840884   11941 crio.go:469] duration metric: took 2.270574682s to extract the tarball
	I0812 10:21:36.840895   11941 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0812 10:21:36.879419   11941 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 10:21:36.919962   11941 crio.go:514] all images are preloaded for cri-o runtime.
	I0812 10:21:36.919982   11941 cache_images.go:84] Images are preloaded, skipping loading
	I0812 10:21:36.919990   11941 kubeadm.go:934] updating node { 192.168.39.215 8443 v1.30.3 crio true true} ...
	I0812 10:21:36.920098   11941 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-883541 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-883541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0812 10:21:36.920166   11941 ssh_runner.go:195] Run: crio config
	I0812 10:21:36.965561   11941 cni.go:84] Creating CNI manager for ""
	I0812 10:21:36.965580   11941 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 10:21:36.965592   11941 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0812 10:21:36.965620   11941 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.215 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-883541 NodeName:addons-883541 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0812 10:21:36.965751   11941 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.215
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-883541"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.215
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.215"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0812 10:21:36.965808   11941 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0812 10:21:36.974948   11941 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 10:21:36.975016   11941 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0812 10:21:36.983862   11941 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0812 10:21:36.999413   11941 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 10:21:37.014760   11941 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
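A minimal sketch (hypothetical invocation; 'kubeadm config validate' is available in recent kubeadm releases, including v1.30) of how the config written above could be sanity-checked before init:

	# Validate the generated kubeadm config against the binaries staged on the node.
	sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new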
	I0812 10:21:37.030346   11941 ssh_runner.go:195] Run: grep 192.168.39.215	control-plane.minikube.internal$ /etc/hosts
	I0812 10:21:37.033991   11941 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.215	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 10:21:37.045109   11941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 10:21:37.153394   11941 ssh_runner.go:195] Run: sudo systemctl start kubelet
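A short sketch (assuming systemd on the guest) of how the kubelet unit and drop-in installed above could be confirmed after the restart:

	# systemctl cat shows /lib/systemd/system/kubelet.service plus the
	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in written above.
	sudo systemctl cat kubelet
	sudo systemctl is-active kubelet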
	I0812 10:21:37.169392   11941 certs.go:68] Setting up /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541 for IP: 192.168.39.215
	I0812 10:21:37.169420   11941 certs.go:194] generating shared ca certs ...
	I0812 10:21:37.169441   11941 certs.go:226] acquiring lock for ca certs: {Name:mkab0b83a49d5695bc804bf960b468b7a47f73f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:21:37.169616   11941 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key
	I0812 10:21:37.336443   11941 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt ...
	I0812 10:21:37.336473   11941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt: {Name:mkbc3c098125ac3f2522015cca30de670fccd979 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:21:37.336667   11941 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key ...
	I0812 10:21:37.336681   11941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key: {Name:mkec40ed0841edc5c74ce2487e55b2bbbd544e77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:21:37.336779   11941 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key
	I0812 10:21:37.389583   11941 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt ...
	I0812 10:21:37.389612   11941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt: {Name:mk8633c1d66058e3480370fbf9bbb60bf08b3700 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:21:37.389787   11941 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key ...
	I0812 10:21:37.389801   11941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key: {Name:mk93371649518188ee90e0d9a0f5b731c74219a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:21:37.389895   11941 certs.go:256] generating profile certs ...
	I0812 10:21:37.389946   11941 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.key
	I0812 10:21:37.389960   11941 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt with IP's: []
	I0812 10:21:37.470209   11941 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt ...
	I0812 10:21:37.470245   11941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: {Name:mk4bcb5ba14ae75cb3839a7116df1154e0ebaace Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:21:37.470457   11941 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.key ...
	I0812 10:21:37.470474   11941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.key: {Name:mk169e7849142fd205bf40be584d56d7a263eb48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:21:37.470590   11941 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/apiserver.key.17dffe01
	I0812 10:21:37.470613   11941 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/apiserver.crt.17dffe01 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.215]
	I0812 10:21:37.601505   11941 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/apiserver.crt.17dffe01 ...
	I0812 10:21:37.601538   11941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/apiserver.crt.17dffe01: {Name:mk6549d9577dee251c862ca81280d8fa57a7529b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:21:37.601746   11941 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/apiserver.key.17dffe01 ...
	I0812 10:21:37.601764   11941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/apiserver.key.17dffe01: {Name:mkc66ae42aef29b5d7d41ff23f8c94d434115cc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:21:37.601886   11941 certs.go:381] copying /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/apiserver.crt.17dffe01 -> /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/apiserver.crt
	I0812 10:21:37.601990   11941 certs.go:385] copying /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/apiserver.key.17dffe01 -> /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/apiserver.key
	I0812 10:21:37.602053   11941 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/proxy-client.key
	I0812 10:21:37.602074   11941 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/proxy-client.crt with IP's: []
	I0812 10:21:37.791331   11941 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/proxy-client.crt ...
	I0812 10:21:37.791366   11941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/proxy-client.crt: {Name:mk3a20dfcd3b1fcbad22d815696fb332aaf2298a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:21:37.791559   11941 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/proxy-client.key ...
	I0812 10:21:37.791573   11941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/proxy-client.key: {Name:mka2f9a0fd92892fd228d39da8655da0480feac8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:21:37.791961   11941 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem (1679 bytes)
	I0812 10:21:37.792117   11941 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem (1082 bytes)
	I0812 10:21:37.792175   11941 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem (1123 bytes)
	I0812 10:21:37.792208   11941 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem (1679 bytes)
	I0812 10:21:37.793549   11941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 10:21:37.817267   11941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 10:21:37.842545   11941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 10:21:37.867098   11941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 10:21:37.889576   11941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0812 10:21:37.911732   11941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0812 10:21:37.934403   11941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 10:21:37.956784   11941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0812 10:21:37.979305   11941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 10:21:38.001271   11941 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 10:21:38.017008   11941 ssh_runner.go:195] Run: openssl version
	I0812 10:21:38.022442   11941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 10:21:38.032927   11941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:21:38.036983   11941 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:21:38.037045   11941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:21:38.042581   11941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
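For context, a short sketch (standard OpenSSL hashed-directory convention) of how the hash computed above relates to the /etc/ssl/certs symlink name:

	# 'openssl x509 -hash' prints the subject-name hash (b5213941 per the log);
	# OpenSSL looks up CAs as <hash>.0 in /etc/ssl/certs, hence the b5213941.0 link.
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	ls -l /etc/ssl/certs/b5213941.0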
	I0812 10:21:38.053007   11941 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 10:21:38.056773   11941 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0812 10:21:38.056832   11941 kubeadm.go:392] StartCluster: {Name:addons-883541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-883541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 10:21:38.056945   11941 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0812 10:21:38.057004   11941 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 10:21:38.091776   11941 cri.go:89] found id: ""
	I0812 10:21:38.091858   11941 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0812 10:21:38.101385   11941 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 10:21:38.110623   11941 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 10:21:38.121912   11941 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 10:21:38.121931   11941 kubeadm.go:157] found existing configuration files:
	
	I0812 10:21:38.121986   11941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 10:21:38.131112   11941 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 10:21:38.131173   11941 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 10:21:38.142139   11941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 10:21:38.152654   11941 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 10:21:38.152742   11941 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 10:21:38.164200   11941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 10:21:38.174761   11941 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 10:21:38.174821   11941 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 10:21:38.184469   11941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 10:21:38.194358   11941 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 10:21:38.194422   11941 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 10:21:38.203398   11941 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 10:21:38.264756   11941 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0812 10:21:38.264815   11941 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 10:21:38.393967   11941 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 10:21:38.394109   11941 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 10:21:38.394250   11941 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0812 10:21:38.597040   11941 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 10:21:38.757951   11941 out.go:204]   - Generating certificates and keys ...
	I0812 10:21:38.758100   11941 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 10:21:38.758202   11941 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 10:21:38.758308   11941 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0812 10:21:38.772574   11941 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0812 10:21:38.902830   11941 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0812 10:21:39.056775   11941 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0812 10:21:39.104179   11941 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0812 10:21:39.104348   11941 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-883541 localhost] and IPs [192.168.39.215 127.0.0.1 ::1]
	I0812 10:21:39.152735   11941 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0812 10:21:39.152926   11941 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-883541 localhost] and IPs [192.168.39.215 127.0.0.1 ::1]
	I0812 10:21:39.314940   11941 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0812 10:21:39.455351   11941 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0812 10:21:39.629750   11941 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0812 10:21:39.630006   11941 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 10:21:39.918591   11941 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 10:21:39.994303   11941 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0812 10:21:40.096562   11941 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 10:21:40.220435   11941 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 10:21:40.286635   11941 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 10:21:40.287365   11941 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 10:21:40.289762   11941 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 10:21:40.291449   11941 out.go:204]   - Booting up control plane ...
	I0812 10:21:40.291551   11941 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 10:21:40.291623   11941 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 10:21:40.291724   11941 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 10:21:40.306861   11941 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 10:21:40.307234   11941 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 10:21:40.307324   11941 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 10:21:40.429508   11941 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0812 10:21:40.429631   11941 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0812 10:21:40.931141   11941 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.943675ms
	I0812 10:21:40.931266   11941 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0812 10:21:46.430253   11941 kubeadm.go:310] [api-check] The API server is healthy after 5.502085251s
	I0812 10:21:46.452147   11941 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0812 10:21:46.471073   11941 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0812 10:21:46.518182   11941 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0812 10:21:46.518419   11941 kubeadm.go:310] [mark-control-plane] Marking the node addons-883541 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0812 10:21:46.531891   11941 kubeadm.go:310] [bootstrap-token] Using token: cgb65i.d34ppi7ahda2k1m8
	I0812 10:21:46.533581   11941 out.go:204]   - Configuring RBAC rules ...
	I0812 10:21:46.533736   11941 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0812 10:21:46.544636   11941 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0812 10:21:46.559355   11941 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0812 10:21:46.563726   11941 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0812 10:21:46.567640   11941 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0812 10:21:46.573235   11941 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0812 10:21:46.839886   11941 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0812 10:21:47.286928   11941 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0812 10:21:47.837132   11941 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0812 10:21:47.837154   11941 kubeadm.go:310] 
	I0812 10:21:47.837208   11941 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0812 10:21:47.837215   11941 kubeadm.go:310] 
	I0812 10:21:47.837329   11941 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0812 10:21:47.837353   11941 kubeadm.go:310] 
	I0812 10:21:47.837402   11941 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0812 10:21:47.837488   11941 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0812 10:21:47.837572   11941 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0812 10:21:47.837581   11941 kubeadm.go:310] 
	I0812 10:21:47.837643   11941 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0812 10:21:47.837651   11941 kubeadm.go:310] 
	I0812 10:21:47.837709   11941 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0812 10:21:47.837719   11941 kubeadm.go:310] 
	I0812 10:21:47.837794   11941 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0812 10:21:47.837900   11941 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0812 10:21:47.838001   11941 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0812 10:21:47.838019   11941 kubeadm.go:310] 
	I0812 10:21:47.838103   11941 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0812 10:21:47.838188   11941 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0812 10:21:47.838202   11941 kubeadm.go:310] 
	I0812 10:21:47.838300   11941 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cgb65i.d34ppi7ahda2k1m8 \
	I0812 10:21:47.838446   11941 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc \
	I0812 10:21:47.838487   11941 kubeadm.go:310] 	--control-plane 
	I0812 10:21:47.838497   11941 kubeadm.go:310] 
	I0812 10:21:47.838587   11941 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0812 10:21:47.838595   11941 kubeadm.go:310] 
	I0812 10:21:47.838714   11941 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cgb65i.d34ppi7ahda2k1m8 \
	I0812 10:21:47.838877   11941 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc 
	I0812 10:21:47.839018   11941 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 10:21:47.839031   11941 cni.go:84] Creating CNI manager for ""
	I0812 10:21:47.839037   11941 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 10:21:47.841021   11941 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0812 10:21:47.842386   11941 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0812 10:21:47.853384   11941 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0812 10:21:47.873701   11941 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 10:21:47.873763   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:47.873838   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-883541 minikube.k8s.io/updated_at=2024_08_12T10_21_47_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7 minikube.k8s.io/name=addons-883541 minikube.k8s.io/primary=true
	I0812 10:21:47.983993   11941 ops.go:34] apiserver oom_adj: -16
	I0812 10:21:47.984058   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:48.484945   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:48.984181   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:49.484311   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:49.984402   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:50.484980   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:50.984076   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:51.484794   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:51.985122   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:52.484335   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:52.985008   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:53.484934   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:53.985003   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:54.485031   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:54.984845   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:55.484834   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:55.984280   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:56.484135   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:56.984721   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:57.484825   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:57.985129   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:58.485000   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:58.984389   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:59.484333   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:59.984253   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:22:00.484116   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:22:00.578666   11941 kubeadm.go:1113] duration metric: took 12.704955754s to wait for elevateKubeSystemPrivileges
	I0812 10:22:00.578700   11941 kubeadm.go:394] duration metric: took 22.521872839s to StartCluster
	I0812 10:22:00.578723   11941 settings.go:142] acquiring lock: {Name:mk4060151bda1f131d6abfbbd19b6a8ab3b5e774 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:22:00.578841   11941 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 10:22:00.579253   11941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/kubeconfig: {Name:mke68f1d372d8e5baa69199b426efaec54860499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:22:00.579460   11941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0812 10:22:00.579490   11941 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 10:22:00.579562   11941 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0812 10:22:00.579688   11941 addons.go:69] Setting yakd=true in profile "addons-883541"
	I0812 10:22:00.579704   11941 addons.go:69] Setting inspektor-gadget=true in profile "addons-883541"
	I0812 10:22:00.579712   11941 addons.go:69] Setting storage-provisioner=true in profile "addons-883541"
	I0812 10:22:00.579729   11941 addons.go:234] Setting addon yakd=true in "addons-883541"
	I0812 10:22:00.579736   11941 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-883541"
	I0812 10:22:00.579746   11941 addons.go:69] Setting cloud-spanner=true in profile "addons-883541"
	I0812 10:22:00.579749   11941 config.go:182] Loaded profile config "addons-883541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:22:00.579758   11941 addons.go:234] Setting addon storage-provisioner=true in "addons-883541"
	I0812 10:22:00.579762   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:00.579765   11941 addons.go:234] Setting addon cloud-spanner=true in "addons-883541"
	I0812 10:22:00.579769   11941 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-883541"
	I0812 10:22:00.579791   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:00.579807   11941 addons.go:69] Setting helm-tiller=true in profile "addons-883541"
	I0812 10:22:00.579812   11941 addons.go:69] Setting default-storageclass=true in profile "addons-883541"
	I0812 10:22:00.579826   11941 addons.go:234] Setting addon helm-tiller=true in "addons-883541"
	I0812 10:22:00.579834   11941 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-883541"
	I0812 10:22:00.579849   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:00.579798   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:00.580178   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.580187   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.580208   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.580221   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.580238   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.580296   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.580328   11941 addons.go:69] Setting registry=true in profile "addons-883541"
	I0812 10:22:00.579720   11941 addons.go:69] Setting volcano=true in profile "addons-883541"
	I0812 10:22:00.580358   11941 addons.go:234] Setting addon registry=true in "addons-883541"
	I0812 10:22:00.579800   11941 addons.go:69] Setting gcp-auth=true in profile "addons-883541"
	I0812 10:22:00.579740   11941 addons.go:234] Setting addon inspektor-gadget=true in "addons-883541"
	I0812 10:22:00.580361   11941 addons.go:234] Setting addon volcano=true in "addons-883541"
	I0812 10:22:00.580372   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.580379   11941 mustload.go:65] Loading cluster: addons-883541
	I0812 10:22:00.579807   11941 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-883541"
	I0812 10:22:00.580385   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.580304   11941 addons.go:69] Setting ingress=true in profile "addons-883541"
	I0812 10:22:00.580423   11941 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-883541"
	I0812 10:22:00.580307   11941 addons.go:69] Setting ingress-dns=true in profile "addons-883541"
	I0812 10:22:00.580438   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.580316   11941 addons.go:69] Setting volumesnapshots=true in profile "addons-883541"
	I0812 10:22:00.580459   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.580464   11941 addons.go:234] Setting addon volumesnapshots=true in "addons-883541"
	I0812 10:22:00.580328   11941 addons.go:69] Setting metrics-server=true in profile "addons-883541"
	I0812 10:22:00.580484   11941 addons.go:234] Setting addon metrics-server=true in "addons-883541"
	I0812 10:22:00.580319   11941 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-883541"
	I0812 10:22:00.580441   11941 addons.go:234] Setting addon ingress-dns=true in "addons-883541"
	I0812 10:22:00.580501   11941 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-883541"
	I0812 10:22:00.580446   11941 addons.go:234] Setting addon ingress=true in "addons-883541"
	I0812 10:22:00.580557   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:00.580588   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:00.580593   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:00.580921   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:00.580926   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.580947   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.580947   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.580965   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.580976   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.580990   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.580996   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:00.581063   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:00.581174   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:00.581303   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.581328   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.581361   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:00.581376   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.581405   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.580922   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.581383   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:00.581465   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.581472   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.581490   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.581632   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.581674   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.581869   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.581903   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.582020   11941 config.go:182] Loaded profile config "addons-883541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:22:00.588973   11941 out.go:177] * Verifying Kubernetes components...
	I0812 10:22:00.593220   11941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 10:22:00.600836   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39529
	I0812 10:22:00.600850   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37871
	I0812 10:22:00.601177   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38049
	I0812 10:22:00.601325   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.601474   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.602016   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.602038   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.602160   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.602175   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.602376   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.602847   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33537
	I0812 10:22:00.602944   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.602973   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.602987   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.603067   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34629
	I0812 10:22:00.603262   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.603365   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.603885   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.603905   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.603961   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.604102   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.604118   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.604473   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.604487   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.604530   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.604571   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.615057   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36181
	I0812 10:22:00.615171   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32921
	I0812 10:22:00.621154   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.621362   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.621432   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.621441   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.621466   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.621762   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.621787   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.621811   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.621812   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.621903   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.621915   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.622072   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39339
	I0812 10:22:00.622217   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.622243   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.623221   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.623324   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.623398   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.629356   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.629371   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.629384   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.629391   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.629560   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.629574   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.630326   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.630405   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.630435   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.630895   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.630936   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.631428   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.631448   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.631492   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.631525   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.659083   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34991
	I0812 10:22:00.659802   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.660454   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.660477   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.660889   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.661097   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.661666   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34033
	I0812 10:22:00.662146   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.662701   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.662718   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.662839   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41195
	I0812 10:22:00.663147   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:22:00.663214   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.663234   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.663291   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37859
	I0812 10:22:00.663941   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.663975   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.664210   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41803
	I0812 10:22:00.664223   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.664283   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40005
	I0812 10:22:00.664744   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.664761   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.664807   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.664950   11941 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0812 10:22:00.665191   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.665260   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.665414   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.665435   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.665633   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.665777   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.665797   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.666112   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.666301   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.667061   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40789
	I0812 10:22:00.667230   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.667663   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.667682   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.667764   11941 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0812 10:22:00.669111   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.669189   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.669744   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.670725   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.670744   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.670814   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42869
	I0812 10:22:00.671009   11941 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-883541"
	I0812 10:22:00.671046   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:00.671110   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46383
	I0812 10:22:00.671408   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.671441   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.671442   11941 addons.go:234] Setting addon default-storageclass=true in "addons-883541"
	I0812 10:22:00.671474   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:00.671624   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.671733   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.671829   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.671850   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.672044   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.672058   11941 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0812 10:22:00.672064   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.672373   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.672859   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.672931   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.673245   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.675255   11941 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0812 10:22:00.675580   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:22:00.675893   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.676040   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.676513   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.676938   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:00.677246   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.677287   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.677568   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.677603   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.678524   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41839
	I0812 10:22:00.678697   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40671
	I0812 10:22:00.678754   11941 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0812 10:22:00.678771   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46163
	I0812 10:22:00.679081   11941 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0812 10:22:00.679197   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.679628   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.679691   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.679713   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.680191   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.680260   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.680277   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.680335   11941 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0812 10:22:00.680349   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0812 10:22:00.680386   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:22:00.680456   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.680548   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35073
	I0812 10:22:00.680750   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.681168   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.681199   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.681716   11941 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0812 10:22:00.681755   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.681775   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.681844   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.681915   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.681930   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.682439   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.682457   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.682861   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.683002   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.683015   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.683271   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.683837   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.683898   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:22:00.684134   11941 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0812 10:22:00.684212   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.684294   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:00.684311   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:00.686262   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:00.686261   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:22:00.686284   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:00.686294   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:00.686306   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:00.686313   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:00.686333   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:22:00.686623   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:00.686649   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:00.686656   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:00.686685   11941 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	W0812 10:22:00.686709   11941 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0812 10:22:00.687045   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.687274   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:22:00.687351   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.687548   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:22:00.687763   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:22:00.687849   11941 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0812 10:22:00.687880   11941 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0812 10:22:00.687899   11941 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0812 10:22:00.687918   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:22:00.687962   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:22:00.688080   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:22:00.688485   11941 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 10:22:00.688585   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40935
	I0812 10:22:00.689095   11941 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0812 10:22:00.689113   11941 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0812 10:22:00.689130   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:22:00.689333   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.689961   11941 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 10:22:00.689981   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 10:22:00.689998   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:22:00.690468   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.690486   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.692329   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.692814   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:22:00.692833   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.693033   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:22:00.693214   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:22:00.693421   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:22:00.693562   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:22:00.693985   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.695057   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:22:00.695079   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.695105   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.695489   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:22:00.695517   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.695520   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:22:00.695889   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:22:00.695928   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:22:00.696216   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:22:00.696262   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:22:00.696542   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:22:00.696822   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:22:00.697025   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:22:00.697240   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.697438   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.702140   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:22:00.704301   11941 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0812 10:22:00.706177   11941 out.go:177]   - Using image docker.io/registry:2.8.3
	I0812 10:22:00.706822   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45143
	I0812 10:22:00.707437   11941 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0812 10:22:00.707458   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0812 10:22:00.707479   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:22:00.707438   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.707989   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.708006   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.708383   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.708580   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.711015   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:22:00.711476   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.711902   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:22:00.711932   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.712097   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:22:00.712319   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:22:00.712576   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:22:00.712649   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34655
	I0812 10:22:00.712988   11941 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0812 10:22:00.713148   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46457
	I0812 10:22:00.713172   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:22:00.713567   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.714067   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.714093   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.714433   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.714488   11941 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0812 10:22:00.714508   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0812 10:22:00.714524   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:22:00.714572   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.716702   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:22:00.717148   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.717193   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41753
	I0812 10:22:00.717625   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.717687   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.717785   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.718171   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.718298   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.718313   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.718751   11941 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0812 10:22:00.718972   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.719003   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.719177   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:22:00.720553   11941 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0812 10:22:00.720571   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0812 10:22:00.720589   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:22:00.720666   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:22:00.720714   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:22:00.720736   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.720929   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.721035   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:22:00.721184   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:22:00.721353   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:22:00.722425   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36123
	I0812 10:22:00.723117   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.723849   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.723867   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.724099   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:22:00.724154   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.724699   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:22:00.724720   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.725040   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.725065   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:22:00.725105   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45401
	I0812 10:22:00.725205   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39877
	I0812 10:22:00.725425   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.725584   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:22:00.725597   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.725801   11941 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0812 10:22:00.725881   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:22:00.726056   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:22:00.726814   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.727006   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:22:00.727036   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.727050   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.727303   11941 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0812 10:22:00.727320   11941 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0812 10:22:00.727345   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:22:00.727599   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.728199   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.728224   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.728406   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.728434   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.728715   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41453
	I0812 10:22:00.728844   11941 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0812 10:22:00.729216   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.729265   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.729446   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.729792   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45021
	I0812 10:22:00.730060   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.730078   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.730136   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.730598   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.730618   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.730629   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.730989   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.731122   11941 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0812 10:22:00.731192   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.731427   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.731474   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.733200   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.733680   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:22:00.733708   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.733775   11941 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0812 10:22:00.733880   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:22:00.733949   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:22:00.734150   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:22:00.734191   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:22:00.734371   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:22:00.734719   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:22:00.735351   11941 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0812 10:22:00.735373   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0812 10:22:00.735400   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:22:00.736131   11941 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0812 10:22:00.736209   11941 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0812 10:22:00.737592   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44951
	I0812 10:22:00.737782   11941 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0812 10:22:00.737794   11941 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0812 10:22:00.737812   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:22:00.737966   11941 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0812 10:22:00.737974   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0812 10:22:00.737988   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:22:00.741257   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.741340   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.742280   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.742284   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:22:00.742315   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:22:00.742331   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.742354   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.742390   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.742404   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.742513   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:22:00.742568   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:22:00.742584   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.742704   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:22:00.742720   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.742747   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:22:00.742790   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:22:00.742950   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:22:00.742954   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:22:00.743007   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:22:00.743084   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:22:00.743198   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:22:00.743275   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:22:00.743417   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:22:00.743539   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:22:00.744173   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.744360   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.745878   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	W0812 10:22:00.747149   11941 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56324->192.168.39.215:22: read: connection reset by peer
	I0812 10:22:00.747175   11941 retry.go:31] will retry after 353.172764ms: ssh: handshake failed: read tcp 192.168.39.1:56324->192.168.39.215:22: read: connection reset by peer
	I0812 10:22:00.747812   11941 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0812 10:22:00.749091   11941 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0812 10:22:00.749111   11941 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0812 10:22:00.749133   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:22:00.752125   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.752300   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42701
	I0812 10:22:00.752492   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:22:00.752507   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.752817   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:22:00.753046   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:22:00.753146   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.753197   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:22:00.753334   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:22:00.753657   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.753669   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.753893   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40933
	I0812 10:22:00.754041   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.754187   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.754223   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.754616   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.754627   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.755458   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.755637   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.755675   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:22:00.755820   11941 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 10:22:00.755829   11941 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 10:22:00.755838   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:22:00.757456   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:22:00.759121   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.759257   11941 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0812 10:22:00.759507   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:22:00.759527   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.759716   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:22:00.759863   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:22:00.759971   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:22:00.760123   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:22:00.762040   11941 out.go:177]   - Using image docker.io/busybox:stable
	I0812 10:22:00.763341   11941 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0812 10:22:00.763355   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0812 10:22:00.763371   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:22:00.766185   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.766508   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:22:00.766522   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.766673   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:22:00.766799   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:22:00.766888   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:22:00.766975   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:22:01.026133   11941 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 10:22:01.026235   11941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0812 10:22:01.050165   11941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0812 10:22:01.061852   11941 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0812 10:22:01.061880   11941 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0812 10:22:01.156461   11941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 10:22:01.183821   11941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 10:22:01.195902   11941 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0812 10:22:01.195921   11941 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0812 10:22:01.197772   11941 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0812 10:22:01.197787   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0812 10:22:01.205568   11941 node_ready.go:35] waiting up to 6m0s for node "addons-883541" to be "Ready" ...
	I0812 10:22:01.210492   11941 node_ready.go:49] node "addons-883541" has status "Ready":"True"
	I0812 10:22:01.210514   11941 node_ready.go:38] duration metric: took 4.9219ms for node "addons-883541" to be "Ready" ...
	I0812 10:22:01.210523   11941 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 10:22:01.222513   11941 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jn9jq" in "kube-system" namespace to be "Ready" ...
	I0812 10:22:01.243720   11941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0812 10:22:01.260523   11941 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0812 10:22:01.260541   11941 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0812 10:22:01.346995   11941 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0812 10:22:01.347011   11941 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0812 10:22:01.350941   11941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0812 10:22:01.364618   11941 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0812 10:22:01.364640   11941 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0812 10:22:01.387333   11941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0812 10:22:01.399066   11941 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0812 10:22:01.399091   11941 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0812 10:22:01.409621   11941 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0812 10:22:01.409650   11941 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0812 10:22:01.420126   11941 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0812 10:22:01.420146   11941 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0812 10:22:01.444122   11941 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0812 10:22:01.444144   11941 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0812 10:22:01.501996   11941 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0812 10:22:01.502018   11941 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0812 10:22:01.561076   11941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0812 10:22:01.565443   11941 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 10:22:01.565462   11941 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0812 10:22:01.606773   11941 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0812 10:22:01.606791   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0812 10:22:01.622885   11941 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0812 10:22:01.622912   11941 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0812 10:22:01.640422   11941 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0812 10:22:01.640446   11941 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0812 10:22:01.666855   11941 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0812 10:22:01.666879   11941 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0812 10:22:01.745577   11941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 10:22:01.748032   11941 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0812 10:22:01.748056   11941 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0812 10:22:01.770990   11941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0812 10:22:01.831033   11941 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0812 10:22:01.831055   11941 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0812 10:22:01.831548   11941 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0812 10:22:01.831565   11941 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0812 10:22:01.865924   11941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0812 10:22:01.875437   11941 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0812 10:22:01.875460   11941 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0812 10:22:01.918434   11941 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0812 10:22:01.918454   11941 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0812 10:22:01.971727   11941 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0812 10:22:01.971755   11941 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0812 10:22:02.035261   11941 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0812 10:22:02.035291   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0812 10:22:02.080530   11941 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0812 10:22:02.080558   11941 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0812 10:22:02.223415   11941 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0812 10:22:02.223443   11941 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0812 10:22:02.304629   11941 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0812 10:22:02.304648   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0812 10:22:02.312743   11941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0812 10:22:02.462427   11941 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0812 10:22:02.462451   11941 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0812 10:22:02.548822   11941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0812 10:22:02.553125   11941 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0812 10:22:02.553148   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0812 10:22:02.772018   11941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0812 10:22:02.828012   11941 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0812 10:22:02.828046   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0812 10:22:03.063831   11941 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0812 10:22:03.063859   11941 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0812 10:22:03.117762   11941 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.091492661s)
	I0812 10:22:03.117800   11941 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0812 10:22:03.256782   11941 pod_ready.go:102] pod "coredns-7db6d8ff4d-jn9jq" in "kube-system" namespace has status "Ready":"False"
	I0812 10:22:03.467068   11941 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0812 10:22:03.467090   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0812 10:22:03.640785   11941 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-883541" context rescaled to 1 replicas
	I0812 10:22:03.794407   11941 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0812 10:22:03.794427   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0812 10:22:04.056170   11941 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0812 10:22:04.056190   11941 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0812 10:22:04.266658   11941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.216455276s)
	I0812 10:22:04.266695   11941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.11020412s)
	I0812 10:22:04.266714   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:04.266729   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:04.266716   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:04.266797   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:04.267068   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:04.267088   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:04.267097   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:04.267105   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:04.267126   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:04.267163   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:04.267180   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:04.267195   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:04.267206   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:04.267384   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:04.267422   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:04.268779   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:04.268798   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:04.268818   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:04.313767   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:04.313795   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:04.314036   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:04.314050   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:04.382426   11941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0812 10:22:05.285276   11941 pod_ready.go:102] pod "coredns-7db6d8ff4d-jn9jq" in "kube-system" namespace has status "Ready":"False"
	I0812 10:22:05.960658   11941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.776802127s)
	I0812 10:22:05.960707   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:05.960718   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:05.960730   11941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.609763628s)
	I0812 10:22:05.960754   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:05.960659   11941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.716901839s)
	I0812 10:22:05.960766   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:05.960794   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:05.960811   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:05.961095   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:05.961113   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:05.961124   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:05.961133   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:05.962928   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:05.962932   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:05.962959   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:05.962965   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:05.962968   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:05.962963   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:05.962976   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:05.962984   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:05.962934   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:05.963038   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:05.963047   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:05.963053   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:05.962937   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:05.963290   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:05.963307   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:05.963336   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:05.963345   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:05.963356   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:05.963368   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:05.983517   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:05.983540   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:05.983894   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:05.983914   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:07.484733   11941 pod_ready.go:92] pod "coredns-7db6d8ff4d-jn9jq" in "kube-system" namespace has status "Ready":"True"
	I0812 10:22:07.484755   11941 pod_ready.go:81] duration metric: took 6.262207003s for pod "coredns-7db6d8ff4d-jn9jq" in "kube-system" namespace to be "Ready" ...
	I0812 10:22:07.484789   11941 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vgg6r" in "kube-system" namespace to be "Ready" ...
	I0812 10:22:07.603775   11941 pod_ready.go:92] pod "coredns-7db6d8ff4d-vgg6r" in "kube-system" namespace has status "Ready":"True"
	I0812 10:22:07.603809   11941 pod_ready.go:81] duration metric: took 119.011289ms for pod "coredns-7db6d8ff4d-vgg6r" in "kube-system" namespace to be "Ready" ...
	I0812 10:22:07.603823   11941 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-883541" in "kube-system" namespace to be "Ready" ...
	I0812 10:22:07.710519   11941 pod_ready.go:92] pod "etcd-addons-883541" in "kube-system" namespace has status "Ready":"True"
	I0812 10:22:07.710546   11941 pod_ready.go:81] duration metric: took 106.712142ms for pod "etcd-addons-883541" in "kube-system" namespace to be "Ready" ...
	I0812 10:22:07.710558   11941 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-883541" in "kube-system" namespace to be "Ready" ...
	I0812 10:22:07.742447   11941 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0812 10:22:07.742494   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:22:07.745738   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:07.746200   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:22:07.746232   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:07.746413   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:22:07.746646   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:22:07.746816   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:22:07.746980   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:22:07.755898   11941 pod_ready.go:92] pod "kube-apiserver-addons-883541" in "kube-system" namespace has status "Ready":"True"
	I0812 10:22:07.755920   11941 pod_ready.go:81] duration metric: took 45.354609ms for pod "kube-apiserver-addons-883541" in "kube-system" namespace to be "Ready" ...
	I0812 10:22:07.755929   11941 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-883541" in "kube-system" namespace to be "Ready" ...
	I0812 10:22:07.785589   11941 pod_ready.go:92] pod "kube-controller-manager-addons-883541" in "kube-system" namespace has status "Ready":"True"
	I0812 10:22:07.785622   11941 pod_ready.go:81] duration metric: took 29.685304ms for pod "kube-controller-manager-addons-883541" in "kube-system" namespace to be "Ready" ...
	I0812 10:22:07.785637   11941 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dswsl" in "kube-system" namespace to be "Ready" ...
	I0812 10:22:07.900844   11941 pod_ready.go:92] pod "kube-proxy-dswsl" in "kube-system" namespace has status "Ready":"True"
	I0812 10:22:07.900900   11941 pod_ready.go:81] duration metric: took 115.255004ms for pod "kube-proxy-dswsl" in "kube-system" namespace to be "Ready" ...
	I0812 10:22:07.900914   11941 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-883541" in "kube-system" namespace to be "Ready" ...
	I0812 10:22:07.937148   11941 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0812 10:22:08.041377   11941 addons.go:234] Setting addon gcp-auth=true in "addons-883541"
	I0812 10:22:08.041423   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:08.041770   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:08.041799   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:08.043553   11941 pod_ready.go:92] pod "kube-scheduler-addons-883541" in "kube-system" namespace has status "Ready":"True"
	I0812 10:22:08.043568   11941 pod_ready.go:81] duration metric: took 142.64749ms for pod "kube-scheduler-addons-883541" in "kube-system" namespace to be "Ready" ...
	I0812 10:22:08.043576   11941 pod_ready.go:38] duration metric: took 6.833038323s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 10:22:08.043596   11941 api_server.go:52] waiting for apiserver process to appear ...
	I0812 10:22:08.043642   11941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:22:08.057062   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42731
	I0812 10:22:08.057545   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:08.058073   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:08.058095   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:08.058432   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:08.059028   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:08.059064   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:08.075447   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45861
	I0812 10:22:08.075954   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:08.076404   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:08.076426   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:08.076734   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:08.076955   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:08.078695   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:22:08.078963   11941 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0812 10:22:08.078990   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:22:08.081673   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:08.082054   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:22:08.082083   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:08.082240   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:22:08.082425   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:22:08.082552   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:22:08.082694   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:22:09.334248   11941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.946883426s)
	I0812 10:22:09.334261   11941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.773154894s)
	I0812 10:22:09.334294   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:09.334309   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:09.334371   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:09.334383   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:09.334368   11941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.588748326s)
	I0812 10:22:09.334412   11941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.563388221s)
	I0812 10:22:09.334445   11941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.468492432s)
	I0812 10:22:09.334460   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:09.334467   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:09.334475   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:09.334481   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:09.334490   11941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.02172108s)
	I0812 10:22:09.334516   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:09.334529   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:09.334535   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:09.334544   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:09.334678   11941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.785820296s)
	W0812 10:22:09.334710   11941 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0812 10:22:09.334737   11941 retry.go:31] will retry after 355.3481ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0812 10:22:09.334820   11941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.562764327s)
	I0812 10:22:09.334826   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:09.334842   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:09.334842   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:09.334852   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:09.334861   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:09.334866   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:09.334869   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:09.334874   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:09.334882   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:09.334887   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:09.334891   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:09.334896   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:09.334899   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:09.334905   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:09.334913   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:09.334914   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:09.334918   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:09.334927   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:09.334936   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:09.334944   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:09.334937   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:09.334971   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:09.335375   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:09.335401   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:09.335408   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:09.335417   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:09.335424   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:09.335470   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:09.335488   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:09.335494   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:09.335503   11941 addons.go:475] Verifying addon ingress=true in "addons-883541"
	I0812 10:22:09.336042   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:09.336052   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:09.336063   11941 addons.go:475] Verifying addon registry=true in "addons-883541"
	I0812 10:22:09.336162   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:09.336182   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:09.336187   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:09.337143   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:09.337158   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:09.337406   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:09.337416   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:09.337424   11941 addons.go:475] Verifying addon metrics-server=true in "addons-883541"
	I0812 10:22:09.337724   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:09.337735   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:09.337744   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:09.337752   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:09.337857   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:09.337886   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:09.337892   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:09.337910   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:09.337917   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:09.337983   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:09.338014   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:09.338020   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:09.338547   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:09.338600   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:09.338621   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:09.338685   11941 out.go:177] * Verifying ingress addon...
	I0812 10:22:09.339658   11941 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-883541 service yakd-dashboard -n yakd-dashboard
	
	I0812 10:22:09.339692   11941 out.go:177] * Verifying registry addon...
	I0812 10:22:09.341444   11941 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0812 10:22:09.341845   11941 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0812 10:22:09.363895   11941 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0812 10:22:09.363927   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:09.369309   11941 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0812 10:22:09.369340   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
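The kapi.go lines above are a list-and-check loop: list the pods matching a label selector, log the current phase, and keep polling until everything is Running. A rough equivalent with client-go, assuming the k8s.io/client-go module is available and using the kubeconfig path seen in this log; the function name and the 500ms poll interval are illustrative:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabeledPods polls until every pod matching the selector reports
    // phase Running, roughly the loop logged as
    // "waiting for pod ..., current state: Pending".
    func waitForLabeledPods(kubeconfig, namespace, selector string, timeout time.Duration) error {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return err
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return err
        }
        ctx, cancel := context.WithTimeout(context.Background(), timeout)
        defer cancel()
        for {
            pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return err
            }
            running := 0
            for _, p := range pods.Items {
                if p.Status.Phase == corev1.PodRunning {
                    running++
                }
            }
            if len(pods.Items) > 0 && running == len(pods.Items) {
                return nil
            }
            fmt.Printf("waiting for %q: %d/%d running\n", selector, running, len(pods.Items))
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond):
            }
        }
    }

    func main() {
        if err := waitForLabeledPods("/var/lib/minikube/kubeconfig",
            "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 90*time.Second); err != nil {
            log.Fatal(err)
        }
    }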
	I0812 10:22:09.691282   11941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0812 10:22:09.848385   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:09.851757   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:10.398058   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:10.402272   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:10.862201   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:10.866803   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:11.050974   11941 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.007308719s)
	I0812 10:22:11.051011   11941 api_server.go:72] duration metric: took 10.471491866s to wait for apiserver process to appear ...
	I0812 10:22:11.051018   11941 api_server.go:88] waiting for apiserver healthz status ...
	I0812 10:22:11.051035   11941 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8443/healthz ...
	I0812 10:22:11.051034   11941 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.972053482s)
	I0812 10:22:11.050977   11941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.668501911s)
	I0812 10:22:11.051135   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:11.051159   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:11.051512   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:11.051531   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:11.051542   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:11.051555   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:11.051792   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:11.051809   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:11.051820   11941 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-883541"
	I0812 10:22:11.052641   11941 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0812 10:22:11.053711   11941 out.go:177] * Verifying csi-hostpath-driver addon...
	I0812 10:22:11.055075   11941 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0812 10:22:11.055766   11941 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0812 10:22:11.056443   11941 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0812 10:22:11.056464   11941 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0812 10:22:11.060247   11941 api_server.go:279] https://192.168.39.215:8443/healthz returned 200:
	ok
	I0812 10:22:11.061638   11941 api_server.go:141] control plane version: v1.30.3
	I0812 10:22:11.061659   11941 api_server.go:131] duration metric: took 10.636343ms to wait for apiserver health ...
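The healthz probe at api_server.go:253 is an HTTPS GET against https://192.168.39.215:8443/healthz that is treated as healthy once it returns 200 with a body of "ok", as it does at 10:22:11.060247 above. A minimal sketch of that poll; it skips certificate verification purely for illustration, while a real client should trust the cluster CA instead:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver healthz endpoint until it answers
    // with HTTP 200 or the deadline passes.
    func waitForHealthz(url string, deadline time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            // Skipping verification is for illustration only.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
                    return nil
                }
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("apiserver at %s not healthy after %s", url, deadline)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.215:8443/healthz", 2*time.Minute); err != nil {
            log.Fatal(err)
        }
    }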
	I0812 10:22:11.061667   11941 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 10:22:11.078712   11941 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0812 10:22:11.078736   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:11.112148   11941 system_pods.go:59] 19 kube-system pods found
	I0812 10:22:11.112180   11941 system_pods.go:61] "coredns-7db6d8ff4d-jn9jq" [951e2ef7-fcae-4716-baa6-a6165ab20cc7] Running
	I0812 10:22:11.112184   11941 system_pods.go:61] "coredns-7db6d8ff4d-vgg6r" [d2d3a2bf-c74b-4317-96a2-2a4917a45e7e] Running
	I0812 10:22:11.112191   11941 system_pods.go:61] "csi-hostpath-attacher-0" [dc2cf19a-dc76-4980-a455-ca84123661e0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0812 10:22:11.112195   11941 system_pods.go:61] "csi-hostpath-resizer-0" [78cfc69c-952e-4d16-b8db-047b7ee663ed] Pending
	I0812 10:22:11.112203   11941 system_pods.go:61] "csi-hostpathplugin-pbz4r" [af18ae79-821d-4b0c-9bac-9e1a015ba81c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0812 10:22:11.112207   11941 system_pods.go:61] "etcd-addons-883541" [7c24dcbb-833e-4d32-ad2d-8fae7badf7ae] Running
	I0812 10:22:11.112212   11941 system_pods.go:61] "kube-apiserver-addons-883541" [6e96bb86-808a-4824-9902-9e19d71d23ef] Running
	I0812 10:22:11.112216   11941 system_pods.go:61] "kube-controller-manager-addons-883541" [52bf2c7b-b7f4-4be1-8c6b-6482400096bb] Running
	I0812 10:22:11.112220   11941 system_pods.go:61] "kube-ingress-dns-minikube" [06067b49-111f-4363-8bb3-2007070757ee] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0812 10:22:11.112224   11941 system_pods.go:61] "kube-proxy-dswsl" [73a29712-f2b7-4371-a3f3-9920d0a4bde5] Running
	I0812 10:22:11.112227   11941 system_pods.go:61] "kube-scheduler-addons-883541" [c4f4ad69-850f-4301-a8dd-21633ca63ca4] Running
	I0812 10:22:11.112231   11941 system_pods.go:61] "metrics-server-c59844bb4-j7r9p" [64cd8192-55f2-4d23-8337-068eddc6126c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 10:22:11.112238   11941 system_pods.go:61] "nvidia-device-plugin-daemonset-r9hqx" [12e175a3-9d78-4c03-af1e-0b8ed635e01b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0812 10:22:11.112244   11941 system_pods.go:61] "registry-698f998955-xww5t" [bd991983-9d87-471c-b2ac-7cae341f9d1f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0812 10:22:11.112249   11941 system_pods.go:61] "registry-proxy-8xczh" [7f708cb9-ae7f-4021-be11-218df27928d4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0812 10:22:11.112254   11941 system_pods.go:61] "snapshot-controller-745499f584-4gwxm" [ee6f839c-444d-4c56-b476-f5a81329f5fc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0812 10:22:11.112260   11941 system_pods.go:61] "snapshot-controller-745499f584-mmlfj" [cacd9827-23a1-4a79-8983-9fb972a22964] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0812 10:22:11.112264   11941 system_pods.go:61] "storage-provisioner" [54a9610b-ab55-47f3-943c-2c6f54430fdc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0812 10:22:11.112270   11941 system_pods.go:61] "tiller-deploy-6677d64bcd-45ft9" [87ea7eab-fd15-420a-ad1a-20231ebf7ba3] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0812 10:22:11.112278   11941 system_pods.go:74] duration metric: took 50.607016ms to wait for pod list to return data ...
	I0812 10:22:11.112286   11941 default_sa.go:34] waiting for default service account to be created ...
	I0812 10:22:11.121157   11941 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0812 10:22:11.121182   11941 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0812 10:22:11.125160   11941 default_sa.go:45] found service account: "default"
	I0812 10:22:11.125183   11941 default_sa.go:55] duration metric: took 12.89161ms for default service account to be created ...
	I0812 10:22:11.125195   11941 system_pods.go:116] waiting for k8s-apps to be running ...
	I0812 10:22:11.146584   11941 system_pods.go:86] 19 kube-system pods found
	I0812 10:22:11.146638   11941 system_pods.go:89] "coredns-7db6d8ff4d-jn9jq" [951e2ef7-fcae-4716-baa6-a6165ab20cc7] Running
	I0812 10:22:11.146647   11941 system_pods.go:89] "coredns-7db6d8ff4d-vgg6r" [d2d3a2bf-c74b-4317-96a2-2a4917a45e7e] Running
	I0812 10:22:11.146658   11941 system_pods.go:89] "csi-hostpath-attacher-0" [dc2cf19a-dc76-4980-a455-ca84123661e0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0812 10:22:11.146666   11941 system_pods.go:89] "csi-hostpath-resizer-0" [78cfc69c-952e-4d16-b8db-047b7ee663ed] Pending
	I0812 10:22:11.146680   11941 system_pods.go:89] "csi-hostpathplugin-pbz4r" [af18ae79-821d-4b0c-9bac-9e1a015ba81c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0812 10:22:11.146691   11941 system_pods.go:89] "etcd-addons-883541" [7c24dcbb-833e-4d32-ad2d-8fae7badf7ae] Running
	I0812 10:22:11.146698   11941 system_pods.go:89] "kube-apiserver-addons-883541" [6e96bb86-808a-4824-9902-9e19d71d23ef] Running
	I0812 10:22:11.146705   11941 system_pods.go:89] "kube-controller-manager-addons-883541" [52bf2c7b-b7f4-4be1-8c6b-6482400096bb] Running
	I0812 10:22:11.146716   11941 system_pods.go:89] "kube-ingress-dns-minikube" [06067b49-111f-4363-8bb3-2007070757ee] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0812 10:22:11.146728   11941 system_pods.go:89] "kube-proxy-dswsl" [73a29712-f2b7-4371-a3f3-9920d0a4bde5] Running
	I0812 10:22:11.146738   11941 system_pods.go:89] "kube-scheduler-addons-883541" [c4f4ad69-850f-4301-a8dd-21633ca63ca4] Running
	I0812 10:22:11.146751   11941 system_pods.go:89] "metrics-server-c59844bb4-j7r9p" [64cd8192-55f2-4d23-8337-068eddc6126c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 10:22:11.146763   11941 system_pods.go:89] "nvidia-device-plugin-daemonset-r9hqx" [12e175a3-9d78-4c03-af1e-0b8ed635e01b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0812 10:22:11.146777   11941 system_pods.go:89] "registry-698f998955-xww5t" [bd991983-9d87-471c-b2ac-7cae341f9d1f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0812 10:22:11.146789   11941 system_pods.go:89] "registry-proxy-8xczh" [7f708cb9-ae7f-4021-be11-218df27928d4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0812 10:22:11.146801   11941 system_pods.go:89] "snapshot-controller-745499f584-4gwxm" [ee6f839c-444d-4c56-b476-f5a81329f5fc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0812 10:22:11.146814   11941 system_pods.go:89] "snapshot-controller-745499f584-mmlfj" [cacd9827-23a1-4a79-8983-9fb972a22964] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0812 10:22:11.146823   11941 system_pods.go:89] "storage-provisioner" [54a9610b-ab55-47f3-943c-2c6f54430fdc] Running
	I0812 10:22:11.146834   11941 system_pods.go:89] "tiller-deploy-6677d64bcd-45ft9" [87ea7eab-fd15-420a-ad1a-20231ebf7ba3] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0812 10:22:11.146846   11941 system_pods.go:126] duration metric: took 21.645227ms to wait for k8s-apps to be running ...
	I0812 10:22:11.146860   11941 system_svc.go:44] waiting for kubelet service to be running ....
	I0812 10:22:11.146916   11941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:22:11.172515   11941 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0812 10:22:11.172546   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0812 10:22:11.234994   11941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0812 10:22:11.345459   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:11.348627   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:11.561914   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:11.614520   11941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.923179123s)
	I0812 10:22:11.614583   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:11.614601   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:11.614581   11941 system_svc.go:56] duration metric: took 467.714276ms WaitForService to wait for kubelet
	I0812 10:22:11.614676   11941 kubeadm.go:582] duration metric: took 11.035148966s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 10:22:11.614711   11941 node_conditions.go:102] verifying NodePressure condition ...
	I0812 10:22:11.614983   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:11.615030   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:11.615039   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:11.615051   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:11.615058   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:11.615278   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:11.615305   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:11.615329   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:11.617990   11941 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 10:22:11.618020   11941 node_conditions.go:123] node cpu capacity is 2
	I0812 10:22:11.618034   11941 node_conditions.go:105] duration metric: took 3.316232ms to run NodePressure ...
	I0812 10:22:11.618046   11941 start.go:241] waiting for startup goroutines ...
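The kubelet check at 10:22:11.146916 above runs systemctl is-active --quiet and treats exit code 0 as "the unit is active"; system_svc.go then records how long that wait took. The same check as a small helper, assuming a systemd host; the function name is illustrative:

    package main

    import (
        "log"
        "os/exec"
    )

    // serviceActive reports whether a systemd unit is active: systemctl
    // is-active --quiet exits 0 for an active unit and non-zero otherwise.
    func serviceActive(unit string) bool {
        return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
    }

    func main() {
        if !serviceActive("kubelet") {
            log.Fatal("kubelet service is not active")
        }
        log.Println("kubelet service is active")
    }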
	I0812 10:22:11.847439   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:11.856525   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:12.065206   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:12.360054   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:12.360299   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:12.551544   11941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.316515504s)
	I0812 10:22:12.551592   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:12.551608   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:12.551988   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:12.552008   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:12.552014   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:12.552024   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:12.552033   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:12.552268   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:12.552281   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:12.554255   11941 addons.go:475] Verifying addon gcp-auth=true in "addons-883541"
	I0812 10:22:12.556104   11941 out.go:177] * Verifying gcp-auth addon...
	I0812 10:22:12.558417   11941 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0812 10:22:12.615188   11941 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0812 10:22:12.615217   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:12.615426   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:12.857143   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:12.863141   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:13.061270   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:13.068029   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:13.347799   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:13.349134   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:13.563394   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:13.566137   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:13.849177   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:13.850869   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:14.062258   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:14.062685   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:14.347190   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:14.350208   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:14.561370   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:14.562720   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:14.846339   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:14.847264   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:15.069939   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:15.071281   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:15.348272   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:15.348558   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:15.563161   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:15.565234   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:15.847222   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:15.850590   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:16.060883   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:16.062560   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:16.345734   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:16.348660   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:16.574487   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:16.582612   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:16.845668   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:16.847153   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:17.177706   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:17.179919   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:17.347431   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:17.349577   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:17.562122   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:17.564245   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:17.847846   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:17.849682   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:18.062715   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:18.063430   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:18.347456   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:18.348767   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:18.561715   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:18.563390   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:18.847032   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:18.847094   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:19.061231   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:19.062246   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:19.346918   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:19.347082   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:19.561658   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:19.561942   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:19.845930   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:19.846391   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:20.061581   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:20.063431   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:20.346070   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:20.346721   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:20.561733   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:20.563548   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:20.846747   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:20.847101   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:21.061871   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:21.062776   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:21.347783   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:21.347783   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:21.561803   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:21.562824   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:21.945454   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:21.946537   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:22.061040   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:22.062644   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:22.345191   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:22.348474   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:22.562204   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:22.562798   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:22.847494   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:22.848036   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:23.063064   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:23.063493   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:23.347386   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:23.347461   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:23.562241   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:23.562818   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:23.847031   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:23.847796   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:24.076249   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:24.076739   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:24.347179   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:24.348238   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:24.564497   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:24.564652   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:24.848702   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:24.851510   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:25.062384   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:25.063537   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:25.346478   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:25.346673   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:25.561848   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:25.563059   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:25.848000   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:25.848530   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:26.061874   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:26.063621   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:26.346055   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:26.347790   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:26.565960   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:26.566389   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:26.846365   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:26.847333   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:27.061039   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:27.061499   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:27.346496   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:27.346698   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:27.561230   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:27.562843   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:27.847820   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:27.847880   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:28.062161   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:28.063434   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:28.346681   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:28.348134   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:28.561251   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:28.562186   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:28.847819   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:28.847950   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:29.061356   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:29.062972   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:29.347284   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:29.348381   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:29.562348   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:29.564139   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:29.845303   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:29.848217   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:30.061130   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:30.063011   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:30.351747   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:30.352539   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:30.562826   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:30.564120   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:30.864169   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:30.865263   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:31.062658   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:31.063455   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:31.349897   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:31.351523   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:31.561243   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:31.563182   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:31.845792   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:31.846926   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:32.063169   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:32.064578   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:32.345701   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:32.347668   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:32.563013   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:32.566190   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:32.846662   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:32.848106   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:33.061886   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:33.062636   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:33.348028   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:33.348329   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:33.561184   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:33.564331   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:33.847207   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:33.847607   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:34.061618   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:34.061992   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:34.347244   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:34.347336   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:34.561205   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:34.562276   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:34.846728   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:34.847917   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:35.062479   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:35.064609   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:35.348300   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:35.349782   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:35.561454   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:35.562896   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:35.848296   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:35.848448   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:36.061469   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:36.063313   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:36.346554   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:36.347384   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:36.561418   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:36.562201   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:36.847362   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:36.848451   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:37.061299   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:37.062196   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:37.348112   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:37.348419   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:37.561300   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:37.562673   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:37.859031   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:37.859260   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:38.061310   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:38.062454   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:38.347017   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:38.348576   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:38.568045   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:38.568527   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:38.847346   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:38.847778   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:39.062899   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:39.066484   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:39.346397   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:39.346749   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:39.563053   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:39.563400   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:39.846700   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:39.846831   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:40.066776   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:40.066838   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:40.346181   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:40.347404   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:40.562032   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:40.562648   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:40.847639   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:40.848213   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:41.061922   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:41.062318   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:41.347894   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:41.348190   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:41.562110   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:41.563014   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:41.846934   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:41.847429   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:42.061344   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:42.061372   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:42.346819   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:42.347270   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:42.561297   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:42.562375   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:42.846977   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:42.847464   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:43.061613   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:43.061888   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:43.347559   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:43.347762   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:43.562185   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:43.564224   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:43.845364   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:43.847520   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:44.061075   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:44.063932   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:44.347754   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:44.348813   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:44.561397   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:44.563086   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:44.846872   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:44.849076   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:45.062734   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:45.063040   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:45.348056   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:45.348957   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:45.561436   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:45.563351   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:45.846992   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:45.847002   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:46.061797   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:46.064393   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:46.355471   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:46.355914   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:46.736577   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:46.750625   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:46.846164   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:46.846259   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:47.060652   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:47.062397   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:47.347970   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:47.348275   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:47.560632   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:47.562212   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:47.847049   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:47.847397   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:48.063829   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:48.064671   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:48.349173   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:48.349838   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:48.562449   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:48.563086   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:48.846730   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:48.847202   11941 kapi.go:107] duration metric: took 39.505356388s to wait for kubernetes.io/minikube-addons=registry ...
	I0812 10:22:49.061058   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:49.063005   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:49.346247   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:49.561711   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:49.561978   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:49.846927   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:50.061747   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:50.061805   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:50.345842   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:50.561573   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:50.563422   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:50.845738   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:51.060794   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:51.062051   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:51.345847   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:51.562844   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:51.564043   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:51.845860   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:52.062577   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:52.062731   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:52.345513   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:52.560692   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:52.562193   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:52.848827   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:53.061166   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:53.061578   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:53.346606   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:53.561675   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:53.563355   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:53.846689   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:54.061178   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:54.062280   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:54.347560   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:54.561186   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:54.562938   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:54.845957   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:55.062108   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:55.062439   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:55.348421   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:55.562038   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:55.563972   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:55.846165   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:56.061578   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:56.062385   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:56.346454   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:56.561952   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:56.562584   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:56.846272   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:57.064041   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:57.066487   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:57.346271   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:57.561349   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:57.562453   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:57.845210   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:58.061412   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:58.064647   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:58.346400   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:58.561638   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:58.562778   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:58.845900   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:59.061794   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:59.062760   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:59.345948   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:59.561438   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:59.563343   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:59.846721   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:00.062321   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:00.062886   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:00.345907   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:00.562238   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:00.562887   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:00.846234   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:01.073719   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:01.074360   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:01.745978   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:01.746699   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:01.746881   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:01.845952   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:02.062966   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:02.063855   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:02.346218   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:02.567994   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:02.568035   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:02.845458   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:03.060902   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:03.062881   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:03.348077   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:03.561927   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:03.562492   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:03.855592   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:04.061772   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:04.062736   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:04.348857   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:04.565096   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:04.566929   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:04.846092   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:05.061544   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:05.065198   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:05.347449   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:05.561552   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:05.564143   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:05.845708   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:06.060851   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:06.062384   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:06.346256   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:06.562886   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:06.563717   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:06.845433   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:07.060763   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:07.062837   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:07.346643   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:07.561325   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:07.561401   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:07.846121   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:08.062729   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:08.062890   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:08.346916   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:08.561574   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:08.561919   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:08.846702   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:09.061896   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:09.062809   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:09.346018   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:09.561345   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:09.563379   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:10.199450   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:10.200298   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:10.210938   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:10.347763   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:10.562175   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:10.562714   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:10.846342   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:11.061869   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:11.062697   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:11.345762   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:11.561346   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:11.561386   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:11.853306   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:12.068991   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:12.069938   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:12.346460   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:12.565161   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:12.565354   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:13.042819   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:13.071072   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:13.073339   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:13.346640   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:13.561434   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:13.562984   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:13.846494   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:14.061133   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:14.062699   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:14.347063   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:14.561804   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:14.562053   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:14.845831   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:15.061408   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:15.063592   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:15.347271   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:15.560686   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:15.562510   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:15.848051   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:16.061445   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:16.061986   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:16.346049   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:16.561316   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:16.561741   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:16.845463   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:17.061841   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:17.062745   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:17.345745   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:17.561903   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:17.563841   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:17.846290   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:18.062696   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:18.063147   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:18.346428   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:18.561284   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:18.561809   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:18.846446   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:19.062022   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:19.063197   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:19.346166   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:19.561888   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:19.561955   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:19.845868   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:20.062681   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:20.067139   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:20.711567   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:20.723248   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:20.728227   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:20.846011   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:21.062068   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:21.062099   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:21.345877   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:21.561017   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:21.562665   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:21.846425   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:22.061624   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:22.063328   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:22.347881   11941 kapi.go:107] duration metric: took 1m13.006433112s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0812 10:23:22.561453   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:22.562808   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:23.061265   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:23.062653   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:23.561621   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:23.562980   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:24.061413   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:24.063026   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:24.561575   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:24.562770   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:25.061652   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:25.064117   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:25.561513   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:25.566692   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:26.060844   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:26.062902   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:26.561717   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:26.562306   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:27.062852   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:27.063738   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:27.560860   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:27.563808   11941 kapi.go:107] duration metric: took 1m15.005390738s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0812 10:23:27.565891   11941 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-883541 cluster.
	I0812 10:23:27.567522   11941 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0812 10:23:27.568838   11941 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0812 10:23:28.061160   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:28.561298   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:29.061527   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:29.562630   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:30.067587   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:30.561188   11941 kapi.go:107] duration metric: took 1m19.505418908s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0812 10:23:30.563301   11941 out.go:177] * Enabled addons: ingress-dns, default-storageclass, cloud-spanner, storage-provisioner, storage-provisioner-rancher, inspektor-gadget, helm-tiller, metrics-server, nvidia-device-plugin, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0812 10:23:30.564753   11941 addons.go:510] duration metric: took 1m29.985189619s for enable addons: enabled=[ingress-dns default-storageclass cloud-spanner storage-provisioner storage-provisioner-rancher inspektor-gadget helm-tiller metrics-server nvidia-device-plugin yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0812 10:23:30.564797   11941 start.go:246] waiting for cluster config update ...
	I0812 10:23:30.564818   11941 start.go:255] writing updated cluster config ...
	I0812 10:23:30.565090   11941 ssh_runner.go:195] Run: rm -f paused
	I0812 10:23:30.616813   11941 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0812 10:23:30.619131   11941 out.go:177] * Done! kubectl is now configured to use "addons-883541" cluster and "default" namespace by default
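Note on the gcp-auth messages above: below is a minimal sketch of a pod manifest that opts out of credential mounting by carrying the gcp-auth-skip-secret label key, as the addon output suggests. Only the label key and the addons-883541 context come from this log; the pod name, the label value "true", the untagged image reference, and the sleep command are illustrative assumptions.

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds                    # hypothetical name
      labels:
        gcp-auth-skip-secret: "true"        # key taken from the addon output above; value assumed
    spec:
      containers:
      - name: app
        image: gcr.io/k8s-minikube/busybox  # image name also seen in the container list below (there pinned by digest)
        command: ["sleep", "3600"]

Applying it with kubectl --context addons-883541 apply -f pod.yaml would create the pod without the mounted GCP credentials; as the output above notes, pods that already exist would still need to be recreated, or the addon re-enabled with --refresh.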
	
	
	==> CRI-O <==
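This section is the node's CRI-O journal. The ListContainers request/response pairs in it are the gRPC view of the node's containers; a rough interactive equivalent, assuming the minikube binary is on PATH and crictl is available on the node (typical for the CRI-O 1.29.1 runtime reported below), would be:

    minikube -p addons-883541 ssh "sudo crictl ps -a"    # list running and exited containers on the node

The command is illustrative only; the entries below were captured from crio's own debug logging (otel-collector/interceptors.go).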
	Aug 12 10:27:20 addons-883541 crio[684]: time="2024-08-12 10:27:20.001470541Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723458440001441546,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6fb16b01-5f13-4d5a-9ec9-a87ecce337f7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:27:20 addons-883541 crio[684]: time="2024-08-12 10:27:20.002058582Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=76b774be-fa74-4b79-9dfc-15347adab720 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:27:20 addons-883541 crio[684]: time="2024-08-12 10:27:20.002130049Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=76b774be-fa74-4b79-9dfc-15347adab720 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:27:20 addons-883541 crio[684]: time="2024-08-12 10:27:20.005949844Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11a5e064e6cb5a1506aca8acabd38bef0a0c8f9ce761328a6978e9705147e2bc,PodSandboxId:f88ffbef4425c7b68c8ce796b3f6985b7cfc7e4b4bba6bf32b0aadf0356af0d5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723458432864709751,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-rbqvk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 653f616f-3126-4077-84a6-1add780ba5b3,},Annotations:map[string]string{io.kubernetes.container.hash: 633329ed,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71a10f35492f5cf69d9e3d9a97fc1254fba649c3ce5b9e138cce8ff4e202a8ac,PodSandboxId:39f1924da0538a1efb355efbab90692f11350595d0a1ca5f8529afa85860cc5c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723458292525735369,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad4b39e3-5426-4eb3-96c3-66ba2085da60,},Annotations:map[string]string{io.kubernet
es.container.hash: dcd87315,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d28748b211808d0709d1f8d92b1f27773ea3e7c2aa8b891ce2f9b1e71fb82781,PodSandboxId:473e8b06f929f1dee0bcfe74fb75299b8b7ee2084a2598667c47571a6f03b0a9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723458214211710769,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8bcf6cfa-5273-4a43-a
187-d7fac51893ef,},Annotations:map[string]string{io.kubernetes.container.hash: 3191fc01,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eac240509e2e5138f4753e7babec07cc3437d645991f345ed566685b6351c2d6,PodSandboxId:29b3d29305c633de452dd36867f4387c054fa848b4a9c78c66fe7efe1c819f06,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723458186724172580,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-kwc7d,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 50446898-416d-4f60-8873-39df2afc9866,},Annotations:map[string]string{io.kubernetes.container.hash: d1621a64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3edc3a24ab1916ea64bfc0fdb218a5b2c79f719140a4b1221dd0e0c45008fd7b,PodSandboxId:98324abca7ce66aada648fbe586a9eb6a0ebc319392e19d14bd6c28dc2200c2d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723458186567642950,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8hpjk,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 68601533-ff19-427b-9d43-efd3eb558184,},Annotations:map[string]string{io.kubernetes.container.hash: 65dde7bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:948043f97f132945e4b3f1203d2103f1cb7954af6fbef5b0c9d2be70fb5f25e0,PodSandboxId:f6ef93ba18dca3e036533fab374e89a25913fa5692a2f59ccc6ad03e2ac448ef,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723458150806549828,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name:
metrics-server-c59844bb4-j7r9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64cd8192-55f2-4d23-8337-068eddc6126c,},Annotations:map[string]string{io.kubernetes.container.hash: 335f9a8a,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982e871e7b916db2660344775e283b55cdd6cbdeb7e68ef1ef253e80744917af,PodSandboxId:c4d16467ed2c0bf103a5438825194251e9352ccc10209c08bf2d925151566c42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723458128599638135,La
bels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54a9610b-ab55-47f3-943c-2c6f54430fdc,},Annotations:map[string]string{io.kubernetes.container.hash: c281eae3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2533dff57ccee88c91a15d29ba02da9eaa18295973699b8bf3459734209c0a76,PodSandboxId:6a9282e009c9846e999a3cfaf8dccbe0ae59b7f603878cfa270f32b1866416da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723458123998570399,Labels:map[string]string{i
o.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vgg6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2d3a2bf-c74b-4317-96a2-2a4917a45e7e,},Annotations:map[string]string{io.kubernetes.container.hash: ee22ffcf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30b643ecfade534d90ab374bb964b9b66487428972249222973c6987d2a56338,PodSandboxId:82000d53fdd3a4f5136af28e965de87096c1aeeb8060c7b06481036ad3ff997e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723458121364462838,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dswsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a29712-f2b7-4371-a3f3-9920d0a4bde5,},Annotations:map[string]string{io.kubernetes.container.hash: 395cea0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ae02c068a5bb6b146f8c8c2ccfe4d8ce5dbd6d02c20d2f8062b8cbbe797ee6,PodSandboxId:455c618e0b16cbd656bc658a3a6b6c2c37a0508c63211e565a13c4e4ce7bd7eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723458101746636633,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efd8a4514a2fd8fc9c6abdbc4414d5a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d494097,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deaf7b141796f275e1a142cafc880eef9e923e65a4144a16e3273e2505a5f1d5,PodSandboxId:e9fce8d5745ee9d6d810921efa27df4c47ce542d7d65ae02c701e3d058690df1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:386
1cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723458101741253207,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c8cf3fc0ab47256c37c9beede9f9b8,},Annotations:map[string]string{io.kubernetes.container.hash: bf804fc3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb7de3bda5707474a51e384e0fa9753d21d19913f168d48b1622e8295eb9d1d,PodSandboxId:ffeadcfa0d6a46d0c46f473ea5d6d2d78ed4b95842950e464ad37b250dc6b776,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6
e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723458101735593446,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 655c07d40b75cac802ca567e9e976c83,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2fedb989f75580f757be1c8fd5a50c51e7d45a6bf7c70a0dbde116afe620857,PodSandboxId:0c6ac3b7f06ebf22043ca89766a5f33a52ebc5a4db77ac3ee21e8c3d3af93b8f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856
f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723458101541980223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d649d7b2d642d21f3eb3783c3e20669,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=76b774be-fa74-4b79-9dfc-15347adab720 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:27:20 addons-883541 crio[684]: time="2024-08-12 10:27:20.045604969Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=46ba4b41-106a-4163-a654-8681a82988f8 name=/runtime.v1.RuntimeService/Version
	Aug 12 10:27:20 addons-883541 crio[684]: time="2024-08-12 10:27:20.045683034Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=46ba4b41-106a-4163-a654-8681a82988f8 name=/runtime.v1.RuntimeService/Version
	Aug 12 10:27:20 addons-883541 crio[684]: time="2024-08-12 10:27:20.046899637Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=58c149cc-910c-4cd6-9238-d1d48ad967b1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:27:20 addons-883541 crio[684]: time="2024-08-12 10:27:20.048287235Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723458440048259094,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=58c149cc-910c-4cd6-9238-d1d48ad967b1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:27:20 addons-883541 crio[684]: time="2024-08-12 10:27:20.049128174Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54c5d0d7-f06b-48e2-93ff-565a7c70eacd name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:27:20 addons-883541 crio[684]: time="2024-08-12 10:27:20.049227100Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54c5d0d7-f06b-48e2-93ff-565a7c70eacd name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:27:20 addons-883541 crio[684]: time="2024-08-12 10:27:20.049529419Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11a5e064e6cb5a1506aca8acabd38bef0a0c8f9ce761328a6978e9705147e2bc,PodSandboxId:f88ffbef4425c7b68c8ce796b3f6985b7cfc7e4b4bba6bf32b0aadf0356af0d5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723458432864709751,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-rbqvk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 653f616f-3126-4077-84a6-1add780ba5b3,},Annotations:map[string]string{io.kubernetes.container.hash: 633329ed,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71a10f35492f5cf69d9e3d9a97fc1254fba649c3ce5b9e138cce8ff4e202a8ac,PodSandboxId:39f1924da0538a1efb355efbab90692f11350595d0a1ca5f8529afa85860cc5c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723458292525735369,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad4b39e3-5426-4eb3-96c3-66ba2085da60,},Annotations:map[string]string{io.kubernet
es.container.hash: dcd87315,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d28748b211808d0709d1f8d92b1f27773ea3e7c2aa8b891ce2f9b1e71fb82781,PodSandboxId:473e8b06f929f1dee0bcfe74fb75299b8b7ee2084a2598667c47571a6f03b0a9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723458214211710769,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8bcf6cfa-5273-4a43-a
187-d7fac51893ef,},Annotations:map[string]string{io.kubernetes.container.hash: 3191fc01,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eac240509e2e5138f4753e7babec07cc3437d645991f345ed566685b6351c2d6,PodSandboxId:29b3d29305c633de452dd36867f4387c054fa848b4a9c78c66fe7efe1c819f06,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723458186724172580,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-kwc7d,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 50446898-416d-4f60-8873-39df2afc9866,},Annotations:map[string]string{io.kubernetes.container.hash: d1621a64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3edc3a24ab1916ea64bfc0fdb218a5b2c79f719140a4b1221dd0e0c45008fd7b,PodSandboxId:98324abca7ce66aada648fbe586a9eb6a0ebc319392e19d14bd6c28dc2200c2d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723458186567642950,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8hpjk,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 68601533-ff19-427b-9d43-efd3eb558184,},Annotations:map[string]string{io.kubernetes.container.hash: 65dde7bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:948043f97f132945e4b3f1203d2103f1cb7954af6fbef5b0c9d2be70fb5f25e0,PodSandboxId:f6ef93ba18dca3e036533fab374e89a25913fa5692a2f59ccc6ad03e2ac448ef,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723458150806549828,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name:
metrics-server-c59844bb4-j7r9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64cd8192-55f2-4d23-8337-068eddc6126c,},Annotations:map[string]string{io.kubernetes.container.hash: 335f9a8a,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982e871e7b916db2660344775e283b55cdd6cbdeb7e68ef1ef253e80744917af,PodSandboxId:c4d16467ed2c0bf103a5438825194251e9352ccc10209c08bf2d925151566c42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723458128599638135,La
bels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54a9610b-ab55-47f3-943c-2c6f54430fdc,},Annotations:map[string]string{io.kubernetes.container.hash: c281eae3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2533dff57ccee88c91a15d29ba02da9eaa18295973699b8bf3459734209c0a76,PodSandboxId:6a9282e009c9846e999a3cfaf8dccbe0ae59b7f603878cfa270f32b1866416da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723458123998570399,Labels:map[string]string{i
o.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vgg6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2d3a2bf-c74b-4317-96a2-2a4917a45e7e,},Annotations:map[string]string{io.kubernetes.container.hash: ee22ffcf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30b643ecfade534d90ab374bb964b9b66487428972249222973c6987d2a56338,PodSandboxId:82000d53fdd3a4f5136af28e965de87096c1aeeb8060c7b06481036ad3ff997e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723458121364462838,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dswsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a29712-f2b7-4371-a3f3-9920d0a4bde5,},Annotations:map[string]string{io.kubernetes.container.hash: 395cea0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ae02c068a5bb6b146f8c8c2ccfe4d8ce5dbd6d02c20d2f8062b8cbbe797ee6,PodSandboxId:455c618e0b16cbd656bc658a3a6b6c2c37a0508c63211e565a13c4e4ce7bd7eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723458101746636633,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efd8a4514a2fd8fc9c6abdbc4414d5a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d494097,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deaf7b141796f275e1a142cafc880eef9e923e65a4144a16e3273e2505a5f1d5,PodSandboxId:e9fce8d5745ee9d6d810921efa27df4c47ce542d7d65ae02c701e3d058690df1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:386
1cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723458101741253207,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c8cf3fc0ab47256c37c9beede9f9b8,},Annotations:map[string]string{io.kubernetes.container.hash: bf804fc3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb7de3bda5707474a51e384e0fa9753d21d19913f168d48b1622e8295eb9d1d,PodSandboxId:ffeadcfa0d6a46d0c46f473ea5d6d2d78ed4b95842950e464ad37b250dc6b776,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6
e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723458101735593446,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 655c07d40b75cac802ca567e9e976c83,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2fedb989f75580f757be1c8fd5a50c51e7d45a6bf7c70a0dbde116afe620857,PodSandboxId:0c6ac3b7f06ebf22043ca89766a5f33a52ebc5a4db77ac3ee21e8c3d3af93b8f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856
f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723458101541980223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d649d7b2d642d21f3eb3783c3e20669,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54c5d0d7-f06b-48e2-93ff-565a7c70eacd name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:27:20 addons-883541 crio[684]: time="2024-08-12 10:27:20.090416541Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=20d45e22-c30c-4e4c-966c-535af6d5755b name=/runtime.v1.RuntimeService/Version
	Aug 12 10:27:20 addons-883541 crio[684]: time="2024-08-12 10:27:20.090519858Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=20d45e22-c30c-4e4c-966c-535af6d5755b name=/runtime.v1.RuntimeService/Version
	Aug 12 10:27:20 addons-883541 crio[684]: time="2024-08-12 10:27:20.092064072Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=80b09a96-d27e-4634-abce-b40e3969b952 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:27:20 addons-883541 crio[684]: time="2024-08-12 10:27:20.093475098Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723458440093444209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=80b09a96-d27e-4634-abce-b40e3969b952 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:27:20 addons-883541 crio[684]: time="2024-08-12 10:27:20.094158342Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ee76caa2-8d2b-493b-bedc-0e3d6de17139 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:27:20 addons-883541 crio[684]: time="2024-08-12 10:27:20.094258371Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ee76caa2-8d2b-493b-bedc-0e3d6de17139 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:27:20 addons-883541 crio[684]: time="2024-08-12 10:27:20.094548776Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11a5e064e6cb5a1506aca8acabd38bef0a0c8f9ce761328a6978e9705147e2bc,PodSandboxId:f88ffbef4425c7b68c8ce796b3f6985b7cfc7e4b4bba6bf32b0aadf0356af0d5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723458432864709751,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-rbqvk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 653f616f-3126-4077-84a6-1add780ba5b3,},Annotations:map[string]string{io.kubernetes.container.hash: 633329ed,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71a10f35492f5cf69d9e3d9a97fc1254fba649c3ce5b9e138cce8ff4e202a8ac,PodSandboxId:39f1924da0538a1efb355efbab90692f11350595d0a1ca5f8529afa85860cc5c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723458292525735369,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad4b39e3-5426-4eb3-96c3-66ba2085da60,},Annotations:map[string]string{io.kubernet
es.container.hash: dcd87315,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d28748b211808d0709d1f8d92b1f27773ea3e7c2aa8b891ce2f9b1e71fb82781,PodSandboxId:473e8b06f929f1dee0bcfe74fb75299b8b7ee2084a2598667c47571a6f03b0a9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723458214211710769,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8bcf6cfa-5273-4a43-a
187-d7fac51893ef,},Annotations:map[string]string{io.kubernetes.container.hash: 3191fc01,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eac240509e2e5138f4753e7babec07cc3437d645991f345ed566685b6351c2d6,PodSandboxId:29b3d29305c633de452dd36867f4387c054fa848b4a9c78c66fe7efe1c819f06,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723458186724172580,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-kwc7d,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 50446898-416d-4f60-8873-39df2afc9866,},Annotations:map[string]string{io.kubernetes.container.hash: d1621a64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3edc3a24ab1916ea64bfc0fdb218a5b2c79f719140a4b1221dd0e0c45008fd7b,PodSandboxId:98324abca7ce66aada648fbe586a9eb6a0ebc319392e19d14bd6c28dc2200c2d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723458186567642950,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8hpjk,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 68601533-ff19-427b-9d43-efd3eb558184,},Annotations:map[string]string{io.kubernetes.container.hash: 65dde7bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:948043f97f132945e4b3f1203d2103f1cb7954af6fbef5b0c9d2be70fb5f25e0,PodSandboxId:f6ef93ba18dca3e036533fab374e89a25913fa5692a2f59ccc6ad03e2ac448ef,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723458150806549828,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name:
metrics-server-c59844bb4-j7r9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64cd8192-55f2-4d23-8337-068eddc6126c,},Annotations:map[string]string{io.kubernetes.container.hash: 335f9a8a,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982e871e7b916db2660344775e283b55cdd6cbdeb7e68ef1ef253e80744917af,PodSandboxId:c4d16467ed2c0bf103a5438825194251e9352ccc10209c08bf2d925151566c42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723458128599638135,La
bels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54a9610b-ab55-47f3-943c-2c6f54430fdc,},Annotations:map[string]string{io.kubernetes.container.hash: c281eae3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2533dff57ccee88c91a15d29ba02da9eaa18295973699b8bf3459734209c0a76,PodSandboxId:6a9282e009c9846e999a3cfaf8dccbe0ae59b7f603878cfa270f32b1866416da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723458123998570399,Labels:map[string]string{i
o.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vgg6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2d3a2bf-c74b-4317-96a2-2a4917a45e7e,},Annotations:map[string]string{io.kubernetes.container.hash: ee22ffcf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30b643ecfade534d90ab374bb964b9b66487428972249222973c6987d2a56338,PodSandboxId:82000d53fdd3a4f5136af28e965de87096c1aeeb8060c7b06481036ad3ff997e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723458121364462838,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dswsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a29712-f2b7-4371-a3f3-9920d0a4bde5,},Annotations:map[string]string{io.kubernetes.container.hash: 395cea0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ae02c068a5bb6b146f8c8c2ccfe4d8ce5dbd6d02c20d2f8062b8cbbe797ee6,PodSandboxId:455c618e0b16cbd656bc658a3a6b6c2c37a0508c63211e565a13c4e4ce7bd7eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723458101746636633,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efd8a4514a2fd8fc9c6abdbc4414d5a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d494097,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deaf7b141796f275e1a142cafc880eef9e923e65a4144a16e3273e2505a5f1d5,PodSandboxId:e9fce8d5745ee9d6d810921efa27df4c47ce542d7d65ae02c701e3d058690df1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:386
1cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723458101741253207,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c8cf3fc0ab47256c37c9beede9f9b8,},Annotations:map[string]string{io.kubernetes.container.hash: bf804fc3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb7de3bda5707474a51e384e0fa9753d21d19913f168d48b1622e8295eb9d1d,PodSandboxId:ffeadcfa0d6a46d0c46f473ea5d6d2d78ed4b95842950e464ad37b250dc6b776,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6
e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723458101735593446,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 655c07d40b75cac802ca567e9e976c83,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2fedb989f75580f757be1c8fd5a50c51e7d45a6bf7c70a0dbde116afe620857,PodSandboxId:0c6ac3b7f06ebf22043ca89766a5f33a52ebc5a4db77ac3ee21e8c3d3af93b8f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856
f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723458101541980223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d649d7b2d642d21f3eb3783c3e20669,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ee76caa2-8d2b-493b-bedc-0e3d6de17139 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:27:20 addons-883541 crio[684]: time="2024-08-12 10:27:20.127901543Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2f327a14-9712-443f-831d-742f4c498deb name=/runtime.v1.RuntimeService/Version
	Aug 12 10:27:20 addons-883541 crio[684]: time="2024-08-12 10:27:20.127980988Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2f327a14-9712-443f-831d-742f4c498deb name=/runtime.v1.RuntimeService/Version
	Aug 12 10:27:20 addons-883541 crio[684]: time="2024-08-12 10:27:20.129748303Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=114b004a-070b-4b71-82e1-27db9dbf2ee5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:27:20 addons-883541 crio[684]: time="2024-08-12 10:27:20.131157749Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723458440131114441,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=114b004a-070b-4b71-82e1-27db9dbf2ee5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:27:20 addons-883541 crio[684]: time="2024-08-12 10:27:20.131666690Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=84e2c2b0-2281-47dc-85c1-3ca545fee77e name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:27:20 addons-883541 crio[684]: time="2024-08-12 10:27:20.131736660Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=84e2c2b0-2281-47dc-85c1-3ca545fee77e name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:27:20 addons-883541 crio[684]: time="2024-08-12 10:27:20.132074129Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11a5e064e6cb5a1506aca8acabd38bef0a0c8f9ce761328a6978e9705147e2bc,PodSandboxId:f88ffbef4425c7b68c8ce796b3f6985b7cfc7e4b4bba6bf32b0aadf0356af0d5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723458432864709751,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-rbqvk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 653f616f-3126-4077-84a6-1add780ba5b3,},Annotations:map[string]string{io.kubernetes.container.hash: 633329ed,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71a10f35492f5cf69d9e3d9a97fc1254fba649c3ce5b9e138cce8ff4e202a8ac,PodSandboxId:39f1924da0538a1efb355efbab90692f11350595d0a1ca5f8529afa85860cc5c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723458292525735369,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad4b39e3-5426-4eb3-96c3-66ba2085da60,},Annotations:map[string]string{io.kubernet
es.container.hash: dcd87315,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d28748b211808d0709d1f8d92b1f27773ea3e7c2aa8b891ce2f9b1e71fb82781,PodSandboxId:473e8b06f929f1dee0bcfe74fb75299b8b7ee2084a2598667c47571a6f03b0a9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723458214211710769,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8bcf6cfa-5273-4a43-a
187-d7fac51893ef,},Annotations:map[string]string{io.kubernetes.container.hash: 3191fc01,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eac240509e2e5138f4753e7babec07cc3437d645991f345ed566685b6351c2d6,PodSandboxId:29b3d29305c633de452dd36867f4387c054fa848b4a9c78c66fe7efe1c819f06,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723458186724172580,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-kwc7d,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 50446898-416d-4f60-8873-39df2afc9866,},Annotations:map[string]string{io.kubernetes.container.hash: d1621a64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3edc3a24ab1916ea64bfc0fdb218a5b2c79f719140a4b1221dd0e0c45008fd7b,PodSandboxId:98324abca7ce66aada648fbe586a9eb6a0ebc319392e19d14bd6c28dc2200c2d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723458186567642950,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8hpjk,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 68601533-ff19-427b-9d43-efd3eb558184,},Annotations:map[string]string{io.kubernetes.container.hash: 65dde7bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:948043f97f132945e4b3f1203d2103f1cb7954af6fbef5b0c9d2be70fb5f25e0,PodSandboxId:f6ef93ba18dca3e036533fab374e89a25913fa5692a2f59ccc6ad03e2ac448ef,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723458150806549828,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name:
metrics-server-c59844bb4-j7r9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64cd8192-55f2-4d23-8337-068eddc6126c,},Annotations:map[string]string{io.kubernetes.container.hash: 335f9a8a,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982e871e7b916db2660344775e283b55cdd6cbdeb7e68ef1ef253e80744917af,PodSandboxId:c4d16467ed2c0bf103a5438825194251e9352ccc10209c08bf2d925151566c42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723458128599638135,La
bels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54a9610b-ab55-47f3-943c-2c6f54430fdc,},Annotations:map[string]string{io.kubernetes.container.hash: c281eae3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2533dff57ccee88c91a15d29ba02da9eaa18295973699b8bf3459734209c0a76,PodSandboxId:6a9282e009c9846e999a3cfaf8dccbe0ae59b7f603878cfa270f32b1866416da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723458123998570399,Labels:map[string]string{i
o.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vgg6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2d3a2bf-c74b-4317-96a2-2a4917a45e7e,},Annotations:map[string]string{io.kubernetes.container.hash: ee22ffcf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30b643ecfade534d90ab374bb964b9b66487428972249222973c6987d2a56338,PodSandboxId:82000d53fdd3a4f5136af28e965de87096c1aeeb8060c7b06481036ad3ff997e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723458121364462838,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dswsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a29712-f2b7-4371-a3f3-9920d0a4bde5,},Annotations:map[string]string{io.kubernetes.container.hash: 395cea0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ae02c068a5bb6b146f8c8c2ccfe4d8ce5dbd6d02c20d2f8062b8cbbe797ee6,PodSandboxId:455c618e0b16cbd656bc658a3a6b6c2c37a0508c63211e565a13c4e4ce7bd7eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723458101746636633,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efd8a4514a2fd8fc9c6abdbc4414d5a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d494097,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deaf7b141796f275e1a142cafc880eef9e923e65a4144a16e3273e2505a5f1d5,PodSandboxId:e9fce8d5745ee9d6d810921efa27df4c47ce542d7d65ae02c701e3d058690df1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:386
1cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723458101741253207,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c8cf3fc0ab47256c37c9beede9f9b8,},Annotations:map[string]string{io.kubernetes.container.hash: bf804fc3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb7de3bda5707474a51e384e0fa9753d21d19913f168d48b1622e8295eb9d1d,PodSandboxId:ffeadcfa0d6a46d0c46f473ea5d6d2d78ed4b95842950e464ad37b250dc6b776,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6
e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723458101735593446,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 655c07d40b75cac802ca567e9e976c83,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2fedb989f75580f757be1c8fd5a50c51e7d45a6bf7c70a0dbde116afe620857,PodSandboxId:0c6ac3b7f06ebf22043ca89766a5f33a52ebc5a4db77ac3ee21e8c3d3af93b8f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856
f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723458101541980223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d649d7b2d642d21f3eb3783c3e20669,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=84e2c2b0-2281-47dc-85c1-3ca545fee77e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	11a5e064e6cb5       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   f88ffbef4425c       hello-world-app-6778b5fc9f-rbqvk
	71a10f35492f5       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                              2 minutes ago       Running             nginx                     0                   39f1924da0538       nginx
	d28748b211808       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   473e8b06f929f       busybox
	eac240509e2e5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   4 minutes ago       Exited              patch                     0                   29b3d29305c63       ingress-nginx-admission-patch-kwc7d
	3edc3a24ab191       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   4 minutes ago       Exited              create                    0                   98324abca7ce6       ingress-nginx-admission-create-8hpjk
	948043f97f132       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago       Running             metrics-server            0                   f6ef93ba18dca       metrics-server-c59844bb4-j7r9p
	982e871e7b916       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   c4d16467ed2c0       storage-provisioner
	2533dff57ccee       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             5 minutes ago       Running             coredns                   0                   6a9282e009c98       coredns-7db6d8ff4d-vgg6r
	30b643ecfade5       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                             5 minutes ago       Running             kube-proxy                0                   82000d53fdd3a       kube-proxy-dswsl
	10ae02c068a5b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                             5 minutes ago       Running             kube-apiserver            0                   455c618e0b16c       kube-apiserver-addons-883541
	deaf7b141796f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             5 minutes ago       Running             etcd                      0                   e9fce8d5745ee       etcd-addons-883541
	beb7de3bda570       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                             5 minutes ago       Running             kube-scheduler            0                   ffeadcfa0d6a4       kube-scheduler-addons-883541
	e2fedb989f755       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                             5 minutes ago       Running             kube-controller-manager   0                   0c6ac3b7f06eb       kube-controller-manager-addons-883541
	
	
	==> coredns [2533dff57ccee88c91a15d29ba02da9eaa18295973699b8bf3459734209c0a76] <==
	[INFO] 10.244.0.8:37449 - 50678 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000181869s
	[INFO] 10.244.0.8:45344 - 12007 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000180694s
	[INFO] 10.244.0.8:45344 - 25056 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000105929s
	[INFO] 10.244.0.8:56326 - 35353 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00010087s
	[INFO] 10.244.0.8:56326 - 15463 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000153691s
	[INFO] 10.244.0.8:59153 - 32412 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00020651s
	[INFO] 10.244.0.8:59153 - 39581 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000114799s
	[INFO] 10.244.0.8:60170 - 865 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000189236s
	[INFO] 10.244.0.8:60170 - 59747 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00036836s
	[INFO] 10.244.0.8:60979 - 30783 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000057422s
	[INFO] 10.244.0.8:60979 - 40242 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000148391s
	[INFO] 10.244.0.8:57768 - 44717 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000067885s
	[INFO] 10.244.0.8:57768 - 27054 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000025757s
	[INFO] 10.244.0.8:59145 - 983 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000083901s
	[INFO] 10.244.0.8:59145 - 6869 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000079095s
	[INFO] 10.244.0.22:45813 - 15159 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000366885s
	[INFO] 10.244.0.22:48779 - 22127 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000111368s
	[INFO] 10.244.0.22:44414 - 10326 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000177358s
	[INFO] 10.244.0.22:43327 - 52036 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000098477s
	[INFO] 10.244.0.22:37897 - 24068 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000073768s
	[INFO] 10.244.0.22:37744 - 20700 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000076627s
	[INFO] 10.244.0.22:52089 - 40911 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00059563s
	[INFO] 10.244.0.22:54749 - 16773 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.000363019s
	[INFO] 10.244.0.26:58622 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000321352s
	[INFO] 10.244.0.26:39150 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000235232s
	
	
	==> describe nodes <==
	Name:               addons-883541
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-883541
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
	                    minikube.k8s.io/name=addons-883541
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_12T10_21_47_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-883541
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 10:21:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-883541
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 10:27:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 10:25:20 +0000   Mon, 12 Aug 2024 10:21:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 10:25:20 +0000   Mon, 12 Aug 2024 10:21:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 10:25:20 +0000   Mon, 12 Aug 2024 10:21:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 10:25:20 +0000   Mon, 12 Aug 2024 10:21:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    addons-883541
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 84cd9f99e87c4addbf07374676c6a3d9
	  System UUID:                84cd9f99-e87c-4add-bf07-374676c6a3d9
	  Boot ID:                    4f64d5e5-194e-41c4-b20c-ff2d6cdb7b8d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  default                     hello-world-app-6778b5fc9f-rbqvk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 coredns-7db6d8ff4d-vgg6r                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m20s
	  kube-system                 etcd-addons-883541                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m33s
	  kube-system                 kube-apiserver-addons-883541             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-controller-manager-addons-883541    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-proxy-dswsl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-scheduler-addons-883541             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 metrics-server-c59844bb4-j7r9p           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m14s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m18s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m40s (x8 over 5m40s)  kubelet          Node addons-883541 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m40s (x8 over 5m40s)  kubelet          Node addons-883541 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m40s (x7 over 5m40s)  kubelet          Node addons-883541 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m33s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m33s                  kubelet          Node addons-883541 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m33s                  kubelet          Node addons-883541 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m33s                  kubelet          Node addons-883541 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m32s                  kubelet          Node addons-883541 status is now: NodeReady
	  Normal  RegisteredNode           5m21s                  node-controller  Node addons-883541 event: Registered Node addons-883541 in Controller
	
	
	==> dmesg <==
	[ +18.144890] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.121303] kauditd_printk_skb: 32 callbacks suppressed
	[Aug12 10:23] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.201861] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.347883] kauditd_printk_skb: 60 callbacks suppressed
	[  +8.376878] kauditd_printk_skb: 50 callbacks suppressed
	[  +5.217612] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.058565] kauditd_printk_skb: 47 callbacks suppressed
	[ +10.943992] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.920449] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.900454] kauditd_printk_skb: 15 callbacks suppressed
	[Aug12 10:24] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.293661] kauditd_printk_skb: 74 callbacks suppressed
	[  +6.147476] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.253103] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.298744] kauditd_printk_skb: 10 callbacks suppressed
	[  +7.004786] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.464795] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.105016] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.709935] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.010905] kauditd_printk_skb: 16 callbacks suppressed
	[  +6.865860] kauditd_printk_skb: 6 callbacks suppressed
	[Aug12 10:25] kauditd_printk_skb: 33 callbacks suppressed
	[Aug12 10:27] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.250250] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [deaf7b141796f275e1a142cafc880eef9e923e65a4144a16e3273e2505a5f1d5] <==
	{"level":"info","ts":"2024-08-12T10:23:13.024214Z","caller":"traceutil/trace.go:171","msg":"trace[526650710] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1099; }","duration":"189.259235ms","start":"2024-08-12T10:23:12.834949Z","end":"2024-08-12T10:23:13.024208Z","steps":["trace[526650710] 'agreement among raft nodes before linearized reading'  (duration: 189.15869ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T10:23:17.507614Z","caller":"traceutil/trace.go:171","msg":"trace[158605441] transaction","detail":"{read_only:false; response_revision:1143; number_of_response:1; }","duration":"131.198666ms","start":"2024-08-12T10:23:17.376399Z","end":"2024-08-12T10:23:17.507598Z","steps":["trace[158605441] 'process raft request'  (duration: 131.093214ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T10:23:20.694861Z","caller":"traceutil/trace.go:171","msg":"trace[894815587] transaction","detail":"{read_only:false; response_revision:1151; number_of_response:1; }","duration":"428.608268ms","start":"2024-08-12T10:23:20.266236Z","end":"2024-08-12T10:23:20.694844Z","steps":["trace[894815587] 'process raft request'  (duration: 428.358865ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T10:23:20.694973Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-12T10:23:20.266221Z","time spent":"428.69643ms","remote":"127.0.0.1:56106","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1147 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-08-12T10:23:20.695244Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"363.816749ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-08-12T10:23:20.695348Z","caller":"traceutil/trace.go:171","msg":"trace[1974372132] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1151; }","duration":"363.931063ms","start":"2024-08-12T10:23:20.331407Z","end":"2024-08-12T10:23:20.695338Z","steps":["trace[1974372132] 'agreement among raft nodes before linearized reading'  (duration: 363.680403ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T10:23:20.695963Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-12T10:23:20.331393Z","time spent":"364.553197ms","remote":"127.0.0.1:56124","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":14386,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"info","ts":"2024-08-12T10:23:20.695281Z","caller":"traceutil/trace.go:171","msg":"trace[523090185] linearizableReadLoop","detail":"{readStateIndex:1189; appliedIndex:1189; }","duration":"363.465858ms","start":"2024-08-12T10:23:20.331427Z","end":"2024-08-12T10:23:20.694892Z","steps":["trace[523090185] 'read index received'  (duration: 363.457598ms)","trace[523090185] 'applied index is now lower than readState.Index'  (duration: 6.803µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-12T10:23:20.701725Z","caller":"traceutil/trace.go:171","msg":"trace[829008344] transaction","detail":"{read_only:false; response_revision:1152; number_of_response:1; }","duration":"109.640284ms","start":"2024-08-12T10:23:20.592072Z","end":"2024-08-12T10:23:20.701712Z","steps":["trace[829008344] 'process raft request'  (duration: 109.478577ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T10:23:20.701841Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.860245ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-08-12T10:23:20.70188Z","caller":"traceutil/trace.go:171","msg":"trace[1064386221] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1152; }","duration":"153.897536ms","start":"2024-08-12T10:23:20.547975Z","end":"2024-08-12T10:23:20.701872Z","steps":["trace[1064386221] 'agreement among raft nodes before linearized reading'  (duration: 153.828417ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T10:23:20.701787Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.859955ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85652"}
	{"level":"info","ts":"2024-08-12T10:23:20.702348Z","caller":"traceutil/trace.go:171","msg":"trace[1306847445] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1152; }","duration":"156.44611ms","start":"2024-08-12T10:23:20.545893Z","end":"2024-08-12T10:23:20.702339Z","steps":["trace[1306847445] 'agreement among raft nodes before linearized reading'  (duration: 155.744827ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T10:23:47.193431Z","caller":"traceutil/trace.go:171","msg":"trace[476845149] linearizableReadLoop","detail":"{readStateIndex:1352; appliedIndex:1351; }","duration":"185.181706ms","start":"2024-08-12T10:23:47.008207Z","end":"2024-08-12T10:23:47.193389Z","steps":["trace[476845149] 'read index received'  (duration: 185.001256ms)","trace[476845149] 'applied index is now lower than readState.Index'  (duration: 179.543µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-12T10:23:47.193565Z","caller":"traceutil/trace.go:171","msg":"trace[1877012934] transaction","detail":"{read_only:false; response_revision:1307; number_of_response:1; }","duration":"356.023905ms","start":"2024-08-12T10:23:46.837525Z","end":"2024-08-12T10:23:47.193548Z","steps":["trace[1877012934] 'process raft request'  (duration: 355.739703ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T10:23:47.193685Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.95883ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-12T10:23:47.193719Z","caller":"traceutil/trace.go:171","msg":"trace[1807069227] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; response_count:0; response_revision:1307; }","duration":"109.04786ms","start":"2024-08-12T10:23:47.084662Z","end":"2024-08-12T10:23:47.19371Z","steps":["trace[1807069227] 'agreement among raft nodes before linearized reading'  (duration: 108.961767ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T10:23:47.193791Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.581372ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-12T10:23:47.193815Z","caller":"traceutil/trace.go:171","msg":"trace[872508521] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/; range_end:/registry/apiregistration.k8s.io/apiservices0; response_count:0; response_revision:1307; }","duration":"185.633208ms","start":"2024-08-12T10:23:47.008175Z","end":"2024-08-12T10:23:47.193808Z","steps":["trace[872508521] 'agreement among raft nodes before linearized reading'  (duration: 185.576449ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T10:23:47.193719Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-12T10:23:46.837508Z","time spent":"356.080043ms","remote":"127.0.0.1:56106","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1300 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-08-12T10:24:34.816677Z","caller":"traceutil/trace.go:171","msg":"trace[453538121] transaction","detail":"{read_only:false; response_revision:1631; number_of_response:1; }","duration":"136.502037ms","start":"2024-08-12T10:24:34.680108Z","end":"2024-08-12T10:24:34.81661Z","steps":["trace[453538121] 'process raft request'  (duration: 136.106999ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T10:25:05.80678Z","caller":"traceutil/trace.go:171","msg":"trace[1928305594] linearizableReadLoop","detail":"{readStateIndex:1970; appliedIndex:1969; }","duration":"163.064295ms","start":"2024-08-12T10:25:05.643702Z","end":"2024-08-12T10:25:05.806766Z","steps":["trace[1928305594] 'read index received'  (duration: 162.940344ms)","trace[1928305594] 'applied index is now lower than readState.Index'  (duration: 123.483µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-12T10:25:05.806939Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.199762ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/csi-snapshotter\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-12T10:25:05.806966Z","caller":"traceutil/trace.go:171","msg":"trace[253925166] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/csi-snapshotter; range_end:; response_count:0; response_revision:1901; }","duration":"163.283685ms","start":"2024-08-12T10:25:05.643676Z","end":"2024-08-12T10:25:05.806959Z","steps":["trace[253925166] 'agreement among raft nodes before linearized reading'  (duration: 163.166647ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T10:25:05.807259Z","caller":"traceutil/trace.go:171","msg":"trace[265932206] transaction","detail":"{read_only:false; response_revision:1901; number_of_response:1; }","duration":"190.510699ms","start":"2024-08-12T10:25:05.616735Z","end":"2024-08-12T10:25:05.807246Z","steps":["trace[265932206] 'process raft request'  (duration: 189.948942ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:27:20 up 6 min,  0 users,  load average: 0.17, 0.89, 0.53
	Linux addons-883541 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [10ae02c068a5bb6b146f8c8c2ccfe4d8ce5dbd6d02c20d2f8062b8cbbe797ee6] <==
	I0812 10:23:36.259655       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0812 10:23:41.106405       1 conn.go:339] Error on socket receive: read tcp 192.168.39.215:8443->192.168.39.1:59544: use of closed network connection
	E0812 10:23:41.293982       1 conn.go:339] Error on socket receive: read tcp 192.168.39.215:8443->192.168.39.1:59578: use of closed network connection
	E0812 10:24:15.983870       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.215:8443->10.244.0.28:35508: read: connection reset by peer
	E0812 10:24:17.577121       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0812 10:24:25.200927       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.224.51"}
	I0812 10:24:39.037308       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0812 10:24:48.274151       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0812 10:24:48.459708       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.219.239"}
	I0812 10:24:51.935980       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0812 10:24:53.002364       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0812 10:25:08.404299       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0812 10:25:08.404351       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0812 10:25:08.429517       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0812 10:25:08.429578       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0812 10:25:08.466949       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0812 10:25:08.467058       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0812 10:25:08.474271       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0812 10:25:08.474320       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0812 10:25:08.489264       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0812 10:25:08.489307       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0812 10:25:09.475201       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0812 10:25:09.489327       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0812 10:25:09.501283       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0812 10:27:10.212584       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.240.219"}
	
	
	==> kube-controller-manager [e2fedb989f75580f757be1c8fd5a50c51e7d45a6bf7c70a0dbde116afe620857] <==
	W0812 10:26:01.521201       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 10:26:01.521335       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0812 10:26:16.145031       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 10:26:16.145184       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0812 10:26:21.504633       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 10:26:21.504748       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0812 10:26:22.011960       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 10:26:22.012115       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0812 10:26:47.358909       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 10:26:47.358944       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0812 10:26:59.914942       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 10:26:59.914984       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0812 10:27:05.670414       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 10:27:05.670464       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0812 10:27:09.954701       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 10:27:09.954747       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0812 10:27:10.036316       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="27.404519ms"
	I0812 10:27:10.054324       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="17.467055ms"
	I0812 10:27:10.089085       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="34.708051ms"
	I0812 10:27:10.089205       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="70.425µs"
	I0812 10:27:12.092570       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0812 10:27:12.096644       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="6.297µs"
	I0812 10:27:12.100490       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0812 10:27:13.489614       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="9.296214ms"
	I0812 10:27:13.489699       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="33.291µs"
	
	
	==> kube-proxy [30b643ecfade534d90ab374bb964b9b66487428972249222973c6987d2a56338] <==
	I0812 10:22:02.161935       1 server_linux.go:69] "Using iptables proxy"
	I0812 10:22:02.178980       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.215"]
	I0812 10:22:02.273619       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0812 10:22:02.273655       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0812 10:22:02.273671       1 server_linux.go:165] "Using iptables Proxier"
	I0812 10:22:02.278628       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0812 10:22:02.278819       1 server.go:872] "Version info" version="v1.30.3"
	I0812 10:22:02.278837       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 10:22:02.280390       1 config.go:192] "Starting service config controller"
	I0812 10:22:02.280400       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0812 10:22:02.280422       1 config.go:101] "Starting endpoint slice config controller"
	I0812 10:22:02.280426       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0812 10:22:02.280766       1 config.go:319] "Starting node config controller"
	I0812 10:22:02.280772       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0812 10:22:02.381501       1 shared_informer.go:320] Caches are synced for node config
	I0812 10:22:02.381561       1 shared_informer.go:320] Caches are synced for service config
	I0812 10:22:02.381580       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [beb7de3bda5707474a51e384e0fa9753d21d19913f168d48b1622e8295eb9d1d] <==
	W0812 10:21:44.468455       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0812 10:21:44.468480       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0812 10:21:44.468461       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0812 10:21:44.468495       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0812 10:21:44.468529       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0812 10:21:44.468558       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0812 10:21:44.468686       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0812 10:21:44.468755       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0812 10:21:45.301708       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0812 10:21:45.301756       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0812 10:21:45.332150       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0812 10:21:45.332195       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0812 10:21:45.598236       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0812 10:21:45.598413       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0812 10:21:45.619729       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0812 10:21:45.620565       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0812 10:21:45.635218       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0812 10:21:45.636126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0812 10:21:45.731093       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0812 10:21:45.731243       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0812 10:21:45.777087       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0812 10:21:45.778097       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0812 10:21:45.974718       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0812 10:21:45.974809       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0812 10:21:47.958278       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 12 10:27:10 addons-883541 kubelet[1260]: I0812 10:27:10.047043    1260 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a1a8f3c-e811-42d5-8439-698c67e08c00" containerName="task-pv-container"
	Aug 12 10:27:10 addons-883541 kubelet[1260]: I0812 10:27:10.047047    1260 memory_manager.go:354] "RemoveStaleState removing state" podUID="cacd9827-23a1-4a79-8983-9fb972a22964" containerName="volume-snapshot-controller"
	Aug 12 10:27:10 addons-883541 kubelet[1260]: I0812 10:27:10.086544    1260 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcqzg\" (UniqueName: \"kubernetes.io/projected/653f616f-3126-4077-84a6-1add780ba5b3-kube-api-access-hcqzg\") pod \"hello-world-app-6778b5fc9f-rbqvk\" (UID: \"653f616f-3126-4077-84a6-1add780ba5b3\") " pod="default/hello-world-app-6778b5fc9f-rbqvk"
	Aug 12 10:27:11 addons-883541 kubelet[1260]: I0812 10:27:11.195247    1260 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77t8d\" (UniqueName: \"kubernetes.io/projected/06067b49-111f-4363-8bb3-2007070757ee-kube-api-access-77t8d\") pod \"06067b49-111f-4363-8bb3-2007070757ee\" (UID: \"06067b49-111f-4363-8bb3-2007070757ee\") "
	Aug 12 10:27:11 addons-883541 kubelet[1260]: I0812 10:27:11.197987    1260 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06067b49-111f-4363-8bb3-2007070757ee-kube-api-access-77t8d" (OuterVolumeSpecName: "kube-api-access-77t8d") pod "06067b49-111f-4363-8bb3-2007070757ee" (UID: "06067b49-111f-4363-8bb3-2007070757ee"). InnerVolumeSpecName "kube-api-access-77t8d". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 12 10:27:11 addons-883541 kubelet[1260]: I0812 10:27:11.295791    1260 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-77t8d\" (UniqueName: \"kubernetes.io/projected/06067b49-111f-4363-8bb3-2007070757ee-kube-api-access-77t8d\") on node \"addons-883541\" DevicePath \"\""
	Aug 12 10:27:11 addons-883541 kubelet[1260]: I0812 10:27:11.456473    1260 scope.go:117] "RemoveContainer" containerID="e756cee9322219f613b1a8a02db05b82035abc0dc386ce67b5f0f4c3ae999275"
	Aug 12 10:27:11 addons-883541 kubelet[1260]: I0812 10:27:11.493594    1260 scope.go:117] "RemoveContainer" containerID="e756cee9322219f613b1a8a02db05b82035abc0dc386ce67b5f0f4c3ae999275"
	Aug 12 10:27:11 addons-883541 kubelet[1260]: E0812 10:27:11.494330    1260 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e756cee9322219f613b1a8a02db05b82035abc0dc386ce67b5f0f4c3ae999275\": container with ID starting with e756cee9322219f613b1a8a02db05b82035abc0dc386ce67b5f0f4c3ae999275 not found: ID does not exist" containerID="e756cee9322219f613b1a8a02db05b82035abc0dc386ce67b5f0f4c3ae999275"
	Aug 12 10:27:11 addons-883541 kubelet[1260]: I0812 10:27:11.494379    1260 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e756cee9322219f613b1a8a02db05b82035abc0dc386ce67b5f0f4c3ae999275"} err="failed to get container status \"e756cee9322219f613b1a8a02db05b82035abc0dc386ce67b5f0f4c3ae999275\": rpc error: code = NotFound desc = could not find container \"e756cee9322219f613b1a8a02db05b82035abc0dc386ce67b5f0f4c3ae999275\": container with ID starting with e756cee9322219f613b1a8a02db05b82035abc0dc386ce67b5f0f4c3ae999275 not found: ID does not exist"
	Aug 12 10:27:13 addons-883541 kubelet[1260]: I0812 10:27:13.155147    1260 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06067b49-111f-4363-8bb3-2007070757ee" path="/var/lib/kubelet/pods/06067b49-111f-4363-8bb3-2007070757ee/volumes"
	Aug 12 10:27:13 addons-883541 kubelet[1260]: I0812 10:27:13.155591    1260 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50446898-416d-4f60-8873-39df2afc9866" path="/var/lib/kubelet/pods/50446898-416d-4f60-8873-39df2afc9866/volumes"
	Aug 12 10:27:13 addons-883541 kubelet[1260]: I0812 10:27:13.156152    1260 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68601533-ff19-427b-9d43-efd3eb558184" path="/var/lib/kubelet/pods/68601533-ff19-427b-9d43-efd3eb558184/volumes"
	Aug 12 10:27:14 addons-883541 kubelet[1260]: I0812 10:27:14.151924    1260 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 12 10:27:15 addons-883541 kubelet[1260]: I0812 10:27:15.325923    1260 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2md98\" (UniqueName: \"kubernetes.io/projected/cb8c0719-79ca-42a6-ab6f-88e8e6a528b7-kube-api-access-2md98\") pod \"cb8c0719-79ca-42a6-ab6f-88e8e6a528b7\" (UID: \"cb8c0719-79ca-42a6-ab6f-88e8e6a528b7\") "
	Aug 12 10:27:15 addons-883541 kubelet[1260]: I0812 10:27:15.325979    1260 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cb8c0719-79ca-42a6-ab6f-88e8e6a528b7-webhook-cert\") pod \"cb8c0719-79ca-42a6-ab6f-88e8e6a528b7\" (UID: \"cb8c0719-79ca-42a6-ab6f-88e8e6a528b7\") "
	Aug 12 10:27:15 addons-883541 kubelet[1260]: I0812 10:27:15.329418    1260 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb8c0719-79ca-42a6-ab6f-88e8e6a528b7-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "cb8c0719-79ca-42a6-ab6f-88e8e6a528b7" (UID: "cb8c0719-79ca-42a6-ab6f-88e8e6a528b7"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 12 10:27:15 addons-883541 kubelet[1260]: I0812 10:27:15.329883    1260 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb8c0719-79ca-42a6-ab6f-88e8e6a528b7-kube-api-access-2md98" (OuterVolumeSpecName: "kube-api-access-2md98") pod "cb8c0719-79ca-42a6-ab6f-88e8e6a528b7" (UID: "cb8c0719-79ca-42a6-ab6f-88e8e6a528b7"). InnerVolumeSpecName "kube-api-access-2md98". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 12 10:27:15 addons-883541 kubelet[1260]: I0812 10:27:15.426552    1260 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-2md98\" (UniqueName: \"kubernetes.io/projected/cb8c0719-79ca-42a6-ab6f-88e8e6a528b7-kube-api-access-2md98\") on node \"addons-883541\" DevicePath \"\""
	Aug 12 10:27:15 addons-883541 kubelet[1260]: I0812 10:27:15.426590    1260 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cb8c0719-79ca-42a6-ab6f-88e8e6a528b7-webhook-cert\") on node \"addons-883541\" DevicePath \"\""
	Aug 12 10:27:15 addons-883541 kubelet[1260]: I0812 10:27:15.476091    1260 scope.go:117] "RemoveContainer" containerID="5044a57a2dcad3544ed6e37106be93f534e10cbf5cdeba246548340fa7d5f87e"
	Aug 12 10:27:15 addons-883541 kubelet[1260]: I0812 10:27:15.501958    1260 scope.go:117] "RemoveContainer" containerID="5044a57a2dcad3544ed6e37106be93f534e10cbf5cdeba246548340fa7d5f87e"
	Aug 12 10:27:15 addons-883541 kubelet[1260]: E0812 10:27:15.502846    1260 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5044a57a2dcad3544ed6e37106be93f534e10cbf5cdeba246548340fa7d5f87e\": container with ID starting with 5044a57a2dcad3544ed6e37106be93f534e10cbf5cdeba246548340fa7d5f87e not found: ID does not exist" containerID="5044a57a2dcad3544ed6e37106be93f534e10cbf5cdeba246548340fa7d5f87e"
	Aug 12 10:27:15 addons-883541 kubelet[1260]: I0812 10:27:15.502892    1260 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5044a57a2dcad3544ed6e37106be93f534e10cbf5cdeba246548340fa7d5f87e"} err="failed to get container status \"5044a57a2dcad3544ed6e37106be93f534e10cbf5cdeba246548340fa7d5f87e\": rpc error: code = NotFound desc = could not find container \"5044a57a2dcad3544ed6e37106be93f534e10cbf5cdeba246548340fa7d5f87e\": container with ID starting with 5044a57a2dcad3544ed6e37106be93f534e10cbf5cdeba246548340fa7d5f87e not found: ID does not exist"
	Aug 12 10:27:17 addons-883541 kubelet[1260]: I0812 10:27:17.157801    1260 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb8c0719-79ca-42a6-ab6f-88e8e6a528b7" path="/var/lib/kubelet/pods/cb8c0719-79ca-42a6-ab6f-88e8e6a528b7/volumes"
	
	
	==> storage-provisioner [982e871e7b916db2660344775e283b55cdd6cbdeb7e68ef1ef253e80744917af] <==
	I0812 10:22:09.384105       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0812 10:22:09.421912       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0812 10:22:09.421959       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0812 10:22:09.439730       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0812 10:22:09.440343       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"04becd8f-d2b0-4a27-8098-732cb8ea640c", APIVersion:"v1", ResourceVersion:"765", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-883541_431f8ced-34f4-43e5-a48a-4c9b94d51b87 became leader
	I0812 10:22:09.440395       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-883541_431f8ced-34f4-43e5-a48a-4c9b94d51b87!
	I0812 10:22:09.541253       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-883541_431f8ced-34f4-43e5-a48a-4c9b94d51b87!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-883541 -n addons-883541
helpers_test.go:261: (dbg) Run:  kubectl --context addons-883541 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (153.17s)

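One possible manual triage of this failure (a sketch only, not part of the recorded run; it assumes the ingress addon's default deployment name, which is consistent with the ingress-nginx-controller-6d9bd977d4 ReplicaSet in the kube-controller-manager log above):

	kubectl --context addons-883541 -n ingress-nginx get pods -o wide
	kubectl --context addons-883541 get ingress -A
	kubectl --context addons-883541 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=100
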
x
+
TestAddons/parallel/MetricsServer (334.68s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.793487ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-j7r9p" [64cd8192-55f2-4d23-8337-068eddc6126c] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004695499s
addons_test.go:417: (dbg) Run:  kubectl --context addons-883541 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-883541 top pods -n kube-system: exit status 1 (95.595361ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vgg6r, age: 2m12.694783476s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-883541 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-883541 top pods -n kube-system: exit status 1 (82.174576ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vgg6r, age: 2m15.302022877s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-883541 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-883541 top pods -n kube-system: exit status 1 (67.602137ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vgg6r, age: 2m21.21295612s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-883541 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-883541 top pods -n kube-system: exit status 1 (70.05076ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vgg6r, age: 2m27.064638289s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-883541 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-883541 top pods -n kube-system: exit status 1 (66.187678ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vgg6r, age: 2m36.083805633s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-883541 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-883541 top pods -n kube-system: exit status 1 (71.890374ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vgg6r, age: 2m47.953029526s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-883541 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-883541 top pods -n kube-system: exit status 1 (66.169158ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vgg6r, age: 3m9.655190625s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-883541 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-883541 top pods -n kube-system: exit status 1 (67.60649ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vgg6r, age: 3m52.316164804s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-883541 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-883541 top pods -n kube-system: exit status 1 (65.293609ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vgg6r, age: 4m42.950532416s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-883541 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-883541 top pods -n kube-system: exit status 1 (65.135423ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vgg6r, age: 6m9.835004223s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-883541 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-883541 top pods -n kube-system: exit status 1 (61.245423ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vgg6r, age: 6m42.66604034s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-883541 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-883541 top pods -n kube-system: exit status 1 (66.075542ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vgg6r, age: 7m38.583276037s

** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-883541 addons disable metrics-server --alsologtostderr -v=1
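A hedged manual check of the same condition, to be run while the addon is still enabled (illustrative only; the APIService name v1beta1.metrics.k8s.io and the deploy/metrics-server target are the usual defaults for this addon and are assumed here, consistent with the metrics-server-c59844bb4-j7r9p pod seen earlier):

	kubectl --context addons-883541 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-883541 -n kube-system logs deploy/metrics-server --tail=50
	until kubectl --context addons-883541 top pods -n kube-system; do sleep 15; done
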
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-883541 -n addons-883541
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-883541 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-883541 logs -n 25: (1.244381781s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-850332                                                                     | download-only-850332 | jenkins | v1.33.1 | 12 Aug 24 10:21 UTC | 12 Aug 24 10:21 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-087798 | jenkins | v1.33.1 | 12 Aug 24 10:21 UTC |                     |
	|         | binary-mirror-087798                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:38789                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-087798                                                                     | binary-mirror-087798 | jenkins | v1.33.1 | 12 Aug 24 10:21 UTC | 12 Aug 24 10:21 UTC |
	| addons  | disable dashboard -p                                                                        | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:21 UTC |                     |
	|         | addons-883541                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:21 UTC |                     |
	|         | addons-883541                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-883541 --wait=true                                                                | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:21 UTC | 12 Aug 24 10:23 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-883541 addons disable                                                                | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:23 UTC | 12 Aug 24 10:23 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-883541 addons disable                                                                | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:23 UTC | 12 Aug 24 10:24 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-883541 ssh cat                                                                       | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:24 UTC | 12 Aug 24 10:24 UTC |
	|         | /opt/local-path-provisioner/pvc-1f7cbad0-48c1-4940-b719-ed56d7f5b5f3_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-883541 addons disable                                                                | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:24 UTC | 12 Aug 24 10:24 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-883541 ip                                                                            | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:24 UTC | 12 Aug 24 10:24 UTC |
	| addons  | addons-883541 addons disable                                                                | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:24 UTC | 12 Aug 24 10:24 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:24 UTC | 12 Aug 24 10:24 UTC |
	|         | -p addons-883541                                                                            |                      |         |         |                     |                     |
	| addons  | addons-883541 addons disable                                                                | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:24 UTC | 12 Aug 24 10:24 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:24 UTC | 12 Aug 24 10:24 UTC |
	|         | addons-883541                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:24 UTC | 12 Aug 24 10:24 UTC |
	|         | -p addons-883541                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-883541 addons disable                                                                | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:24 UTC | 12 Aug 24 10:24 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:24 UTC | 12 Aug 24 10:24 UTC |
	|         | addons-883541                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-883541 ssh curl -s                                                                   | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:24 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-883541 addons                                                                        | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:25 UTC | 12 Aug 24 10:25 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-883541 addons                                                                        | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:25 UTC | 12 Aug 24 10:25 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-883541 ip                                                                            | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:27 UTC | 12 Aug 24 10:27 UTC |
	| addons  | addons-883541 addons disable                                                                | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:27 UTC | 12 Aug 24 10:27 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-883541 addons disable                                                                | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:27 UTC | 12 Aug 24 10:27 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-883541 addons                                                                        | addons-883541        | jenkins | v1.33.1 | 12 Aug 24 10:29 UTC | 12 Aug 24 10:29 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 10:21:08
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 10:21:08.010162   11941 out.go:291] Setting OutFile to fd 1 ...
	I0812 10:21:08.010413   11941 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:21:08.010423   11941 out.go:304] Setting ErrFile to fd 2...
	I0812 10:21:08.010429   11941 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:21:08.010649   11941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 10:21:08.011274   11941 out.go:298] Setting JSON to false
	I0812 10:21:08.012117   11941 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":209,"bootTime":1723457859,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 10:21:08.012179   11941 start.go:139] virtualization: kvm guest
	I0812 10:21:08.014249   11941 out.go:177] * [addons-883541] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 10:21:08.015719   11941 notify.go:220] Checking for updates...
	I0812 10:21:08.015736   11941 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 10:21:08.017075   11941 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 10:21:08.018615   11941 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 10:21:08.020026   11941 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 10:21:08.021255   11941 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 10:21:08.022824   11941 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 10:21:08.024404   11941 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 10:21:08.057326   11941 out.go:177] * Using the kvm2 driver based on user configuration
	I0812 10:21:08.058616   11941 start.go:297] selected driver: kvm2
	I0812 10:21:08.058630   11941 start.go:901] validating driver "kvm2" against <nil>
	I0812 10:21:08.058644   11941 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 10:21:08.059335   11941 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 10:21:08.059425   11941 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19409-3774/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 10:21:08.074950   11941 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 10:21:08.075013   11941 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 10:21:08.075258   11941 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 10:21:08.075288   11941 cni.go:84] Creating CNI manager for ""
	I0812 10:21:08.075298   11941 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 10:21:08.075309   11941 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0812 10:21:08.075388   11941 start.go:340] cluster config:
	{Name:addons-883541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-883541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 10:21:08.075506   11941 iso.go:125] acquiring lock: {Name:mk817f4bb6a5031e68978029203802957205757f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 10:21:08.077605   11941 out.go:177] * Starting "addons-883541" primary control-plane node in "addons-883541" cluster
	I0812 10:21:08.079120   11941 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 10:21:08.079168   11941 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0812 10:21:08.079181   11941 cache.go:56] Caching tarball of preloaded images
	I0812 10:21:08.079273   11941 preload.go:172] Found /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 10:21:08.079285   11941 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 10:21:08.079596   11941 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/config.json ...
	I0812 10:21:08.079622   11941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/config.json: {Name:mkb5800adfa9cd219cce82c1061d5731703702f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:21:08.079781   11941 start.go:360] acquireMachinesLock for addons-883541: {Name:mkf1f3ff7da562a087a1e344d13e67c3c8140973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 10:21:08.079838   11941 start.go:364] duration metric: took 42.414µs to acquireMachinesLock for "addons-883541"
	I0812 10:21:08.079863   11941 start.go:93] Provisioning new machine with config: &{Name:addons-883541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-883541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 10:21:08.079935   11941 start.go:125] createHost starting for "" (driver="kvm2")
	I0812 10:21:08.081850   11941 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0812 10:21:08.082017   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:21:08.082068   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:21:08.096756   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44609
	I0812 10:21:08.097267   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:21:08.097886   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:21:08.097916   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:21:08.098243   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:21:08.098451   11941 main.go:141] libmachine: (addons-883541) Calling .GetMachineName
	I0812 10:21:08.098620   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:21:08.098767   11941 start.go:159] libmachine.API.Create for "addons-883541" (driver="kvm2")
	I0812 10:21:08.098797   11941 client.go:168] LocalClient.Create starting
	I0812 10:21:08.098836   11941 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem
	I0812 10:21:08.180288   11941 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem
	I0812 10:21:08.383408   11941 main.go:141] libmachine: Running pre-create checks...
	I0812 10:21:08.383432   11941 main.go:141] libmachine: (addons-883541) Calling .PreCreateCheck
	I0812 10:21:08.383947   11941 main.go:141] libmachine: (addons-883541) Calling .GetConfigRaw
	I0812 10:21:08.384420   11941 main.go:141] libmachine: Creating machine...
	I0812 10:21:08.384434   11941 main.go:141] libmachine: (addons-883541) Calling .Create
	I0812 10:21:08.384605   11941 main.go:141] libmachine: (addons-883541) Creating KVM machine...
	I0812 10:21:08.385902   11941 main.go:141] libmachine: (addons-883541) DBG | found existing default KVM network
	I0812 10:21:08.386616   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:08.386431   11963 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015330}
	I0812 10:21:08.386638   11941 main.go:141] libmachine: (addons-883541) DBG | created network xml: 
	I0812 10:21:08.386648   11941 main.go:141] libmachine: (addons-883541) DBG | <network>
	I0812 10:21:08.386653   11941 main.go:141] libmachine: (addons-883541) DBG |   <name>mk-addons-883541</name>
	I0812 10:21:08.386659   11941 main.go:141] libmachine: (addons-883541) DBG |   <dns enable='no'/>
	I0812 10:21:08.386666   11941 main.go:141] libmachine: (addons-883541) DBG |   
	I0812 10:21:08.386700   11941 main.go:141] libmachine: (addons-883541) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0812 10:21:08.386713   11941 main.go:141] libmachine: (addons-883541) DBG |     <dhcp>
	I0812 10:21:08.386723   11941 main.go:141] libmachine: (addons-883541) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0812 10:21:08.386731   11941 main.go:141] libmachine: (addons-883541) DBG |     </dhcp>
	I0812 10:21:08.386764   11941 main.go:141] libmachine: (addons-883541) DBG |   </ip>
	I0812 10:21:08.386786   11941 main.go:141] libmachine: (addons-883541) DBG |   
	I0812 10:21:08.386796   11941 main.go:141] libmachine: (addons-883541) DBG | </network>
	I0812 10:21:08.386804   11941 main.go:141] libmachine: (addons-883541) DBG | 
	I0812 10:21:08.392398   11941 main.go:141] libmachine: (addons-883541) DBG | trying to create private KVM network mk-addons-883541 192.168.39.0/24...
	I0812 10:21:08.460813   11941 main.go:141] libmachine: (addons-883541) DBG | private KVM network mk-addons-883541 192.168.39.0/24 created
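	If the network setup ever needs to be verified by hand on the Jenkins host, the private network the driver just created can be inspected with standard virsh commands (a sketch; the network name is taken from the log above):
	    # Confirm the network exists and is active
	    virsh net-list --all
	    # Dump the XML libvirt actually stored for it
	    virsh net-dumpxml mk-addons-883541
	    # Later on, list the DHCP leases handed out on this network
	    virsh net-dhcp-leases mk-addons-883541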
	I0812 10:21:08.460853   11941 main.go:141] libmachine: (addons-883541) Setting up store path in /home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541 ...
	I0812 10:21:08.460882   11941 main.go:141] libmachine: (addons-883541) Building disk image from file:///home/jenkins/minikube-integration/19409-3774/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0812 10:21:08.460900   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:08.460784   11963 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 10:21:08.460984   11941 main.go:141] libmachine: (addons-883541) Downloading /home/jenkins/minikube-integration/19409-3774/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19409-3774/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0812 10:21:08.743896   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:08.743778   11963 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa...
	I0812 10:21:09.002621   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:09.002470   11963 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/addons-883541.rawdisk...
	I0812 10:21:09.002642   11941 main.go:141] libmachine: (addons-883541) DBG | Writing magic tar header
	I0812 10:21:09.002679   11941 main.go:141] libmachine: (addons-883541) DBG | Writing SSH key tar header
	I0812 10:21:09.002687   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:09.002585   11963 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541 ...
	I0812 10:21:09.002698   11941 main.go:141] libmachine: (addons-883541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541
	I0812 10:21:09.002712   11941 main.go:141] libmachine: (addons-883541) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541 (perms=drwx------)
	I0812 10:21:09.002730   11941 main.go:141] libmachine: (addons-883541) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube/machines (perms=drwxr-xr-x)
	I0812 10:21:09.002772   11941 main.go:141] libmachine: (addons-883541) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube (perms=drwxr-xr-x)
	I0812 10:21:09.002799   11941 main.go:141] libmachine: (addons-883541) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774 (perms=drwxrwxr-x)
	I0812 10:21:09.002822   11941 main.go:141] libmachine: (addons-883541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube/machines
	I0812 10:21:09.002841   11941 main.go:141] libmachine: (addons-883541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 10:21:09.002853   11941 main.go:141] libmachine: (addons-883541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774
	I0812 10:21:09.002870   11941 main.go:141] libmachine: (addons-883541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0812 10:21:09.002878   11941 main.go:141] libmachine: (addons-883541) DBG | Checking permissions on dir: /home/jenkins
	I0812 10:21:09.002885   11941 main.go:141] libmachine: (addons-883541) DBG | Checking permissions on dir: /home
	I0812 10:21:09.002898   11941 main.go:141] libmachine: (addons-883541) DBG | Skipping /home - not owner
	I0812 10:21:09.002932   11941 main.go:141] libmachine: (addons-883541) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0812 10:21:09.002952   11941 main.go:141] libmachine: (addons-883541) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0812 10:21:09.002961   11941 main.go:141] libmachine: (addons-883541) Creating domain...
	I0812 10:21:09.003924   11941 main.go:141] libmachine: (addons-883541) define libvirt domain using xml: 
	I0812 10:21:09.003948   11941 main.go:141] libmachine: (addons-883541) <domain type='kvm'>
	I0812 10:21:09.003967   11941 main.go:141] libmachine: (addons-883541)   <name>addons-883541</name>
	I0812 10:21:09.003980   11941 main.go:141] libmachine: (addons-883541)   <memory unit='MiB'>4000</memory>
	I0812 10:21:09.004006   11941 main.go:141] libmachine: (addons-883541)   <vcpu>2</vcpu>
	I0812 10:21:09.004026   11941 main.go:141] libmachine: (addons-883541)   <features>
	I0812 10:21:09.004039   11941 main.go:141] libmachine: (addons-883541)     <acpi/>
	I0812 10:21:09.004048   11941 main.go:141] libmachine: (addons-883541)     <apic/>
	I0812 10:21:09.004056   11941 main.go:141] libmachine: (addons-883541)     <pae/>
	I0812 10:21:09.004063   11941 main.go:141] libmachine: (addons-883541)     
	I0812 10:21:09.004069   11941 main.go:141] libmachine: (addons-883541)   </features>
	I0812 10:21:09.004076   11941 main.go:141] libmachine: (addons-883541)   <cpu mode='host-passthrough'>
	I0812 10:21:09.004081   11941 main.go:141] libmachine: (addons-883541)   
	I0812 10:21:09.004093   11941 main.go:141] libmachine: (addons-883541)   </cpu>
	I0812 10:21:09.004110   11941 main.go:141] libmachine: (addons-883541)   <os>
	I0812 10:21:09.004128   11941 main.go:141] libmachine: (addons-883541)     <type>hvm</type>
	I0812 10:21:09.004138   11941 main.go:141] libmachine: (addons-883541)     <boot dev='cdrom'/>
	I0812 10:21:09.004148   11941 main.go:141] libmachine: (addons-883541)     <boot dev='hd'/>
	I0812 10:21:09.004158   11941 main.go:141] libmachine: (addons-883541)     <bootmenu enable='no'/>
	I0812 10:21:09.004167   11941 main.go:141] libmachine: (addons-883541)   </os>
	I0812 10:21:09.004178   11941 main.go:141] libmachine: (addons-883541)   <devices>
	I0812 10:21:09.004188   11941 main.go:141] libmachine: (addons-883541)     <disk type='file' device='cdrom'>
	I0812 10:21:09.004208   11941 main.go:141] libmachine: (addons-883541)       <source file='/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/boot2docker.iso'/>
	I0812 10:21:09.004225   11941 main.go:141] libmachine: (addons-883541)       <target dev='hdc' bus='scsi'/>
	I0812 10:21:09.004238   11941 main.go:141] libmachine: (addons-883541)       <readonly/>
	I0812 10:21:09.004247   11941 main.go:141] libmachine: (addons-883541)     </disk>
	I0812 10:21:09.004253   11941 main.go:141] libmachine: (addons-883541)     <disk type='file' device='disk'>
	I0812 10:21:09.004265   11941 main.go:141] libmachine: (addons-883541)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0812 10:21:09.004276   11941 main.go:141] libmachine: (addons-883541)       <source file='/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/addons-883541.rawdisk'/>
	I0812 10:21:09.004283   11941 main.go:141] libmachine: (addons-883541)       <target dev='hda' bus='virtio'/>
	I0812 10:21:09.004288   11941 main.go:141] libmachine: (addons-883541)     </disk>
	I0812 10:21:09.004294   11941 main.go:141] libmachine: (addons-883541)     <interface type='network'>
	I0812 10:21:09.004301   11941 main.go:141] libmachine: (addons-883541)       <source network='mk-addons-883541'/>
	I0812 10:21:09.004307   11941 main.go:141] libmachine: (addons-883541)       <model type='virtio'/>
	I0812 10:21:09.004313   11941 main.go:141] libmachine: (addons-883541)     </interface>
	I0812 10:21:09.004319   11941 main.go:141] libmachine: (addons-883541)     <interface type='network'>
	I0812 10:21:09.004325   11941 main.go:141] libmachine: (addons-883541)       <source network='default'/>
	I0812 10:21:09.004332   11941 main.go:141] libmachine: (addons-883541)       <model type='virtio'/>
	I0812 10:21:09.004337   11941 main.go:141] libmachine: (addons-883541)     </interface>
	I0812 10:21:09.004344   11941 main.go:141] libmachine: (addons-883541)     <serial type='pty'>
	I0812 10:21:09.004349   11941 main.go:141] libmachine: (addons-883541)       <target port='0'/>
	I0812 10:21:09.004363   11941 main.go:141] libmachine: (addons-883541)     </serial>
	I0812 10:21:09.004370   11941 main.go:141] libmachine: (addons-883541)     <console type='pty'>
	I0812 10:21:09.004380   11941 main.go:141] libmachine: (addons-883541)       <target type='serial' port='0'/>
	I0812 10:21:09.004395   11941 main.go:141] libmachine: (addons-883541)     </console>
	I0812 10:21:09.004412   11941 main.go:141] libmachine: (addons-883541)     <rng model='virtio'>
	I0812 10:21:09.004427   11941 main.go:141] libmachine: (addons-883541)       <backend model='random'>/dev/random</backend>
	I0812 10:21:09.004436   11941 main.go:141] libmachine: (addons-883541)     </rng>
	I0812 10:21:09.004447   11941 main.go:141] libmachine: (addons-883541)     
	I0812 10:21:09.004456   11941 main.go:141] libmachine: (addons-883541)     
	I0812 10:21:09.004468   11941 main.go:141] libmachine: (addons-883541)   </devices>
	I0812 10:21:09.004477   11941 main.go:141] libmachine: (addons-883541) </domain>
	I0812 10:21:09.004487   11941 main.go:141] libmachine: (addons-883541) 
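	Once the domain XML above has been defined, the resulting guest can be examined directly through libvirt; a sketch of the usual inspection commands (domain name taken from the log):
	    # Show the domain as libvirt stored it, including the generated MAC addresses
	    virsh dumpxml addons-883541
	    # Current lifecycle state (running, shut off, ...)
	    virsh domstate addons-883541
	    # IP addresses reported for the domain's interfaces
	    virsh domifaddr addons-883541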
	I0812 10:21:09.010499   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:50:75:f2 in network default
	I0812 10:21:09.011103   11941 main.go:141] libmachine: (addons-883541) Ensuring networks are active...
	I0812 10:21:09.011129   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:09.011764   11941 main.go:141] libmachine: (addons-883541) Ensuring network default is active
	I0812 10:21:09.012067   11941 main.go:141] libmachine: (addons-883541) Ensuring network mk-addons-883541 is active
	I0812 10:21:09.012516   11941 main.go:141] libmachine: (addons-883541) Getting domain xml...
	I0812 10:21:09.013134   11941 main.go:141] libmachine: (addons-883541) Creating domain...
	I0812 10:21:10.424149   11941 main.go:141] libmachine: (addons-883541) Waiting to get IP...
	I0812 10:21:10.424797   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:10.425212   11941 main.go:141] libmachine: (addons-883541) DBG | unable to find current IP address of domain addons-883541 in network mk-addons-883541
	I0812 10:21:10.425285   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:10.425161   11963 retry.go:31] will retry after 205.860955ms: waiting for machine to come up
	I0812 10:21:10.632616   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:10.633142   11941 main.go:141] libmachine: (addons-883541) DBG | unable to find current IP address of domain addons-883541 in network mk-addons-883541
	I0812 10:21:10.633168   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:10.633088   11963 retry.go:31] will retry after 339.919384ms: waiting for machine to come up
	I0812 10:21:10.974737   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:10.975182   11941 main.go:141] libmachine: (addons-883541) DBG | unable to find current IP address of domain addons-883541 in network mk-addons-883541
	I0812 10:21:10.975213   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:10.975124   11963 retry.go:31] will retry after 380.644279ms: waiting for machine to come up
	I0812 10:21:11.357601   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:11.357921   11941 main.go:141] libmachine: (addons-883541) DBG | unable to find current IP address of domain addons-883541 in network mk-addons-883541
	I0812 10:21:11.357947   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:11.357868   11963 retry.go:31] will retry after 544.700698ms: waiting for machine to come up
	I0812 10:21:11.904505   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:11.904933   11941 main.go:141] libmachine: (addons-883541) DBG | unable to find current IP address of domain addons-883541 in network mk-addons-883541
	I0812 10:21:11.904962   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:11.904899   11963 retry.go:31] will retry after 662.908472ms: waiting for machine to come up
	I0812 10:21:12.569947   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:12.570484   11941 main.go:141] libmachine: (addons-883541) DBG | unable to find current IP address of domain addons-883541 in network mk-addons-883541
	I0812 10:21:12.570523   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:12.570408   11963 retry.go:31] will retry after 790.630659ms: waiting for machine to come up
	I0812 10:21:13.363042   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:13.363514   11941 main.go:141] libmachine: (addons-883541) DBG | unable to find current IP address of domain addons-883541 in network mk-addons-883541
	I0812 10:21:13.363539   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:13.363476   11963 retry.go:31] will retry after 901.462035ms: waiting for machine to come up
	I0812 10:21:14.267066   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:14.267503   11941 main.go:141] libmachine: (addons-883541) DBG | unable to find current IP address of domain addons-883541 in network mk-addons-883541
	I0812 10:21:14.267533   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:14.267465   11963 retry.go:31] will retry after 1.021341432s: waiting for machine to come up
	I0812 10:21:15.290676   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:15.291073   11941 main.go:141] libmachine: (addons-883541) DBG | unable to find current IP address of domain addons-883541 in network mk-addons-883541
	I0812 10:21:15.291096   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:15.291030   11963 retry.go:31] will retry after 1.713051639s: waiting for machine to come up
	I0812 10:21:17.006538   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:17.006931   11941 main.go:141] libmachine: (addons-883541) DBG | unable to find current IP address of domain addons-883541 in network mk-addons-883541
	I0812 10:21:17.006960   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:17.006881   11963 retry.go:31] will retry after 1.554642738s: waiting for machine to come up
	I0812 10:21:18.563773   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:18.564315   11941 main.go:141] libmachine: (addons-883541) DBG | unable to find current IP address of domain addons-883541 in network mk-addons-883541
	I0812 10:21:18.564343   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:18.564269   11963 retry.go:31] will retry after 1.7660377s: waiting for machine to come up
	I0812 10:21:20.331974   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:20.332362   11941 main.go:141] libmachine: (addons-883541) DBG | unable to find current IP address of domain addons-883541 in network mk-addons-883541
	I0812 10:21:20.332385   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:20.332320   11963 retry.go:31] will retry after 2.252678642s: waiting for machine to come up
	I0812 10:21:22.587099   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:22.587579   11941 main.go:141] libmachine: (addons-883541) DBG | unable to find current IP address of domain addons-883541 in network mk-addons-883541
	I0812 10:21:22.587603   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:22.587553   11963 retry.go:31] will retry after 3.950816065s: waiting for machine to come up
	I0812 10:21:26.542025   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:26.542518   11941 main.go:141] libmachine: (addons-883541) DBG | unable to find current IP address of domain addons-883541 in network mk-addons-883541
	I0812 10:21:26.542552   11941 main.go:141] libmachine: (addons-883541) DBG | I0812 10:21:26.542434   11963 retry.go:31] will retry after 3.939180324s: waiting for machine to come up
	I0812 10:21:30.484567   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:30.485187   11941 main.go:141] libmachine: (addons-883541) Found IP for machine: 192.168.39.215
	I0812 10:21:30.485204   11941 main.go:141] libmachine: (addons-883541) Reserving static IP address...
	I0812 10:21:30.485232   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has current primary IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:30.485687   11941 main.go:141] libmachine: (addons-883541) DBG | unable to find host DHCP lease matching {name: "addons-883541", mac: "52:54:00:63:c3:eb", ip: "192.168.39.215"} in network mk-addons-883541
	I0812 10:21:30.582378   11941 main.go:141] libmachine: (addons-883541) Reserved static IP address: 192.168.39.215
	I0812 10:21:30.582408   11941 main.go:141] libmachine: (addons-883541) Waiting for SSH to be available...
	I0812 10:21:30.582417   11941 main.go:141] libmachine: (addons-883541) DBG | Getting to WaitForSSH function...
	I0812 10:21:30.585422   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:30.585953   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:minikube Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:30.585987   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:30.586233   11941 main.go:141] libmachine: (addons-883541) DBG | Using SSH client type: external
	I0812 10:21:30.586264   11941 main.go:141] libmachine: (addons-883541) DBG | Using SSH private key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa (-rw-------)
	I0812 10:21:30.586342   11941 main.go:141] libmachine: (addons-883541) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.215 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0812 10:21:30.586363   11941 main.go:141] libmachine: (addons-883541) DBG | About to run SSH command:
	I0812 10:21:30.586383   11941 main.go:141] libmachine: (addons-883541) DBG | exit 0
	I0812 10:21:30.716970   11941 main.go:141] libmachine: (addons-883541) DBG | SSH cmd err, output: <nil>: 
	I0812 10:21:30.717266   11941 main.go:141] libmachine: (addons-883541) KVM machine creation complete!
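	The WaitForSSH probe above can be reproduced by hand with the same key and address that appear in the log, which helps when SSH readiness rather than DHCP is suspected (a sketch using the values from this run):
	    # Same connectivity test the driver runs: a bare 'exit 0' over SSH
	    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	      -i /home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa \
	      docker@192.168.39.215 'exit 0'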
	I0812 10:21:30.717681   11941 main.go:141] libmachine: (addons-883541) Calling .GetConfigRaw
	I0812 10:21:30.718229   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:21:30.718428   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:21:30.718640   11941 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0812 10:21:30.718657   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:21:30.720022   11941 main.go:141] libmachine: Detecting operating system of created instance...
	I0812 10:21:30.720038   11941 main.go:141] libmachine: Waiting for SSH to be available...
	I0812 10:21:30.720045   11941 main.go:141] libmachine: Getting to WaitForSSH function...
	I0812 10:21:30.720053   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:21:30.722434   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:30.722825   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:30.722851   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:30.723008   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:21:30.723192   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:21:30.723354   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:21:30.723490   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:21:30.723650   11941 main.go:141] libmachine: Using SSH client type: native
	I0812 10:21:30.723830   11941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0812 10:21:30.723840   11941 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0812 10:21:30.820163   11941 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 10:21:30.820183   11941 main.go:141] libmachine: Detecting the provisioner...
	I0812 10:21:30.820190   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:21:30.823026   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:30.823375   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:30.823401   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:30.823618   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:21:30.823863   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:21:30.824049   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:21:30.824232   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:21:30.824420   11941 main.go:141] libmachine: Using SSH client type: native
	I0812 10:21:30.824657   11941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0812 10:21:30.824674   11941 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0812 10:21:30.921757   11941 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0812 10:21:30.921839   11941 main.go:141] libmachine: found compatible host: buildroot
	I0812 10:21:30.921852   11941 main.go:141] libmachine: Provisioning with buildroot...
	I0812 10:21:30.921862   11941 main.go:141] libmachine: (addons-883541) Calling .GetMachineName
	I0812 10:21:30.922116   11941 buildroot.go:166] provisioning hostname "addons-883541"
	I0812 10:21:30.922147   11941 main.go:141] libmachine: (addons-883541) Calling .GetMachineName
	I0812 10:21:30.922329   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:21:30.925105   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:30.925630   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:30.925663   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:30.925876   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:21:30.926107   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:21:30.926367   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:21:30.926536   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:21:30.926766   11941 main.go:141] libmachine: Using SSH client type: native
	I0812 10:21:30.926931   11941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0812 10:21:30.926944   11941 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-883541 && echo "addons-883541" | sudo tee /etc/hostname
	I0812 10:21:31.039214   11941 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-883541
	
	I0812 10:21:31.039241   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:21:31.042261   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.042624   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:31.042654   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.042923   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:21:31.043155   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:21:31.043317   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:21:31.043485   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:21:31.043638   11941 main.go:141] libmachine: Using SSH client type: native
	I0812 10:21:31.043803   11941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0812 10:21:31.043818   11941 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-883541' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-883541/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-883541' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 10:21:31.149590   11941 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 10:21:31.149628   11941 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19409-3774/.minikube CaCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19409-3774/.minikube}
	I0812 10:21:31.149678   11941 buildroot.go:174] setting up certificates
	I0812 10:21:31.149695   11941 provision.go:84] configureAuth start
	I0812 10:21:31.149707   11941 main.go:141] libmachine: (addons-883541) Calling .GetMachineName
	I0812 10:21:31.149976   11941 main.go:141] libmachine: (addons-883541) Calling .GetIP
	I0812 10:21:31.152745   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.153272   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:31.153295   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.153520   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:21:31.156081   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.156397   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:31.156423   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.156599   11941 provision.go:143] copyHostCerts
	I0812 10:21:31.156675   11941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem (1123 bytes)
	I0812 10:21:31.156809   11941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem (1679 bytes)
	I0812 10:21:31.156925   11941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem (1082 bytes)
	I0812 10:21:31.157001   11941 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem org=jenkins.addons-883541 san=[127.0.0.1 192.168.39.215 addons-883541 localhost minikube]
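	(Side note, not part of the captured log: a minimal Go sketch of what "generating server cert ... san=[...]" amounts to, using only the standard library. The certificate here is self-signed for brevity, whereas the log shows it being signed with the minikube CA key pair (ca.pem/ca-key.pem); the subject and validity values are illustrative assumptions.)

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Key pair for the server certificate.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// SANs taken from the log line above: names and IPs the VM is reachable as.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-883541"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // illustrative validity
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"addons-883541", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.215")},
		}
		// Self-signed here (template used as its own parent); the real flow signs with the CA key.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}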
	I0812 10:21:31.248717   11941 provision.go:177] copyRemoteCerts
	I0812 10:21:31.248773   11941 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 10:21:31.248795   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:21:31.251420   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.251797   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:31.251819   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.252023   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:21:31.252199   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:21:31.252414   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:21:31.252563   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
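	(Side note, not part of the captured log: a sketch of the "new ssh client" step with golang.org/x/crypto/ssh, using the key path, user and address printed above. Disabling host-key verification is done only to keep the example short; that choice is an assumption, not how sshutil is necessarily configured.)

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // example only; do not skip verification in real tooling
		}
		client, err := ssh.Dial("tcp", "192.168.39.215:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		fmt.Println("ssh connection established")
	}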
	I0812 10:21:31.331113   11941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0812 10:21:31.355053   11941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0812 10:21:31.378351   11941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0812 10:21:31.401215   11941 provision.go:87] duration metric: took 251.504934ms to configureAuth
	I0812 10:21:31.401246   11941 buildroot.go:189] setting minikube options for container-runtime
	I0812 10:21:31.401453   11941 config.go:182] Loaded profile config "addons-883541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:21:31.401542   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:21:31.404516   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.404839   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:31.404885   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.405068   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:21:31.405299   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:21:31.405438   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:21:31.405579   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:21:31.405699   11941 main.go:141] libmachine: Using SSH client type: native
	I0812 10:21:31.405853   11941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0812 10:21:31.405868   11941 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 10:21:31.665706   11941 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 10:21:31.665728   11941 main.go:141] libmachine: Checking connection to Docker...
	I0812 10:21:31.665735   11941 main.go:141] libmachine: (addons-883541) Calling .GetURL
	I0812 10:21:31.667016   11941 main.go:141] libmachine: (addons-883541) DBG | Using libvirt version 6000000
	I0812 10:21:31.668924   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.669271   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:31.669298   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.669395   11941 main.go:141] libmachine: Docker is up and running!
	I0812 10:21:31.669411   11941 main.go:141] libmachine: Reticulating splines...
	I0812 10:21:31.669418   11941 client.go:171] duration metric: took 23.570613961s to LocalClient.Create
	I0812 10:21:31.669440   11941 start.go:167] duration metric: took 23.570674209s to libmachine.API.Create "addons-883541"
	I0812 10:21:31.669449   11941 start.go:293] postStartSetup for "addons-883541" (driver="kvm2")
	I0812 10:21:31.669458   11941 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 10:21:31.669474   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:21:31.669741   11941 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 10:21:31.669764   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:21:31.671960   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.672326   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:31.672359   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.672593   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:21:31.672809   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:21:31.672986   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:21:31.673127   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:21:31.751158   11941 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 10:21:31.755512   11941 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 10:21:31.755546   11941 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/addons for local assets ...
	I0812 10:21:31.755621   11941 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/files for local assets ...
	I0812 10:21:31.755647   11941 start.go:296] duration metric: took 86.193416ms for postStartSetup
	I0812 10:21:31.755680   11941 main.go:141] libmachine: (addons-883541) Calling .GetConfigRaw
	I0812 10:21:31.756321   11941 main.go:141] libmachine: (addons-883541) Calling .GetIP
	I0812 10:21:31.758891   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.759214   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:31.759232   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.759572   11941 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/config.json ...
	I0812 10:21:31.759819   11941 start.go:128] duration metric: took 23.679872598s to createHost
	I0812 10:21:31.759845   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:21:31.762441   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.762765   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:31.762794   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.762923   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:21:31.763161   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:21:31.763367   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:21:31.763543   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:21:31.763732   11941 main.go:141] libmachine: Using SSH client type: native
	I0812 10:21:31.763896   11941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0812 10:21:31.763905   11941 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0812 10:21:31.861590   11941 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723458091.838434418
	
	I0812 10:21:31.861621   11941 fix.go:216] guest clock: 1723458091.838434418
	I0812 10:21:31.861632   11941 fix.go:229] Guest: 2024-08-12 10:21:31.838434418 +0000 UTC Remote: 2024-08-12 10:21:31.75983237 +0000 UTC m=+23.782995760 (delta=78.602048ms)
	I0812 10:21:31.861673   11941 fix.go:200] guest clock delta is within tolerance: 78.602048ms
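	(Side note, not part of the captured log: the guest-clock check above boils down to comparing the guest's "date +%s.%N" output against the host's wall clock. A sketch using the two timestamps from this run; the 2s tolerance used here is an assumed value for illustration.)

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		guest := time.Unix(1723458091, 838434418)                         // parsed from the guest's "date +%s.%N"
		remote := time.Date(2024, 8, 12, 10, 21, 31, 759832370, time.UTC) // host-side reference time
		delta := guest.Sub(remote)
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest clock delta: %v, within tolerance: %v\n", delta, delta < 2*time.Second)
	}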
	I0812 10:21:31.861689   11941 start.go:83] releasing machines lock for "addons-883541", held for 23.78183708s
	I0812 10:21:31.861720   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:21:31.861989   11941 main.go:141] libmachine: (addons-883541) Calling .GetIP
	I0812 10:21:31.864913   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.865286   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:31.865316   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.865447   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:21:31.865896   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:21:31.866104   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:21:31.866242   11941 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 10:21:31.866279   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:21:31.866340   11941 ssh_runner.go:195] Run: cat /version.json
	I0812 10:21:31.866365   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:21:31.869201   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.869340   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.869554   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:31.869589   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.869689   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:21:31.869803   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:31.869825   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:31.869864   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:21:31.869979   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:21:31.870043   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:21:31.870112   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:21:31.870181   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:21:31.870223   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:21:31.870351   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:21:31.941556   11941 ssh_runner.go:195] Run: systemctl --version
	I0812 10:21:31.984093   11941 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 10:21:32.143967   11941 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 10:21:32.150030   11941 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 10:21:32.150098   11941 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 10:21:32.165232   11941 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0812 10:21:32.165259   11941 start.go:495] detecting cgroup driver to use...
	I0812 10:21:32.165333   11941 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 10:21:32.181149   11941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 10:21:32.195218   11941 docker.go:217] disabling cri-docker service (if available) ...
	I0812 10:21:32.195291   11941 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 10:21:32.209540   11941 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 10:21:32.223886   11941 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 10:21:32.336364   11941 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 10:21:32.477052   11941 docker.go:233] disabling docker service ...
	I0812 10:21:32.477125   11941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 10:21:32.490680   11941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 10:21:32.503560   11941 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 10:21:32.638938   11941 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 10:21:32.748297   11941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 10:21:32.762174   11941 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 10:21:32.779947   11941 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0812 10:21:32.780000   11941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:21:32.790168   11941 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 10:21:32.790225   11941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:21:32.800410   11941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:21:32.810497   11941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:21:32.820384   11941 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 10:21:32.830935   11941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:21:32.841148   11941 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:21:32.857581   11941 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
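	(Side note, not part of the captured log: after the sed edits above, /etc/crio/crio.conf.d/02-crio.conf should end up roughly like the file written below. The section headers and surrounding content are an assumption reconstructed from the logged commands, not a file captured from the VM.)

	package main

	import "os"

	// Reconstructed (assumed) shape of the CRI-O drop-in after the edits above:
	// pause image pinned, cgroupfs as cgroup manager, conmon in the pod cgroup,
	// and unprivileged low ports enabled via default_sysctls.
	const crioDropIn = `[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	`

	func main() {
		// Written to the working directory for illustration; the real file lives on the guest.
		if err := os.WriteFile("02-crio.conf", []byte(crioDropIn), 0o644); err != nil {
			panic(err)
		}
	}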
	I0812 10:21:32.867677   11941 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 10:21:32.877793   11941 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0812 10:21:32.877858   11941 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0812 10:21:32.891675   11941 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 10:21:32.901886   11941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 10:21:33.012932   11941 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0812 10:21:33.147893   11941 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 10:21:33.147981   11941 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
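	(Side note, not part of the captured log: "Will wait 60s for socket path" is a stat-and-retry loop on the CRI-O socket. A sketch; the 500ms poll interval is an assumption.)

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil // socket file exists, so CRI-O has come back up
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			panic(err)
		}
		fmt.Println("crio socket is ready")
	}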
	I0812 10:21:33.152587   11941 start.go:563] Will wait 60s for crictl version
	I0812 10:21:33.152658   11941 ssh_runner.go:195] Run: which crictl
	I0812 10:21:33.156180   11941 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 10:21:33.191537   11941 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0812 10:21:33.191670   11941 ssh_runner.go:195] Run: crio --version
	I0812 10:21:33.218953   11941 ssh_runner.go:195] Run: crio --version
	I0812 10:21:33.246760   11941 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0812 10:21:33.248440   11941 main.go:141] libmachine: (addons-883541) Calling .GetIP
	I0812 10:21:33.251010   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:33.251400   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:21:33.251430   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:21:33.251688   11941 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0812 10:21:33.255824   11941 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 10:21:33.268324   11941 kubeadm.go:883] updating cluster {Name:addons-883541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-883541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0812 10:21:33.268424   11941 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 10:21:33.268464   11941 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 10:21:33.299877   11941 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0812 10:21:33.299939   11941 ssh_runner.go:195] Run: which lz4
	I0812 10:21:33.303751   11941 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0812 10:21:33.307521   11941 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0812 10:21:33.307554   11941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0812 10:21:34.570282   11941 crio.go:462] duration metric: took 1.266569953s to copy over tarball
	I0812 10:21:34.570348   11941 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0812 10:21:36.840842   11941 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.270465129s)
	I0812 10:21:36.840884   11941 crio.go:469] duration metric: took 2.270574682s to extract the tarball
	I0812 10:21:36.840895   11941 ssh_runner.go:146] rm: /preloaded.tar.lz4
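	(Side note, not part of the captured log: the preload step copies the cached .tar.lz4 over and unpacks it with the exact tar invocation shown above. A sketch that shells out the same way; it runs locally here, while the test runs it on the guest over SSH.)

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Mirrors the logged command: extract the lz4-compressed preload into /var,
		// preserving security.capability xattrs so container binaries keep their capabilities.
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("extract preload: %v\n%s", err, out)
		}
	}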
	I0812 10:21:36.879419   11941 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 10:21:36.919962   11941 crio.go:514] all images are preloaded for cri-o runtime.
	I0812 10:21:36.919982   11941 cache_images.go:84] Images are preloaded, skipping loading
	I0812 10:21:36.919990   11941 kubeadm.go:934] updating node { 192.168.39.215 8443 v1.30.3 crio true true} ...
	I0812 10:21:36.920098   11941 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-883541 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-883541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0812 10:21:36.920166   11941 ssh_runner.go:195] Run: crio config
	I0812 10:21:36.965561   11941 cni.go:84] Creating CNI manager for ""
	I0812 10:21:36.965580   11941 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 10:21:36.965592   11941 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0812 10:21:36.965620   11941 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.215 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-883541 NodeName:addons-883541 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0812 10:21:36.965751   11941 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.215
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-883541"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.215
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.215"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0812 10:21:36.965808   11941 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0812 10:21:36.974948   11941 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 10:21:36.975016   11941 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0812 10:21:36.983862   11941 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0812 10:21:36.999413   11941 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 10:21:37.014760   11941 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0812 10:21:37.030346   11941 ssh_runner.go:195] Run: grep 192.168.39.215	control-plane.minikube.internal$ /etc/hosts
	I0812 10:21:37.033991   11941 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.215	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
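	(Side note, not part of the captured log: the bash one-liner above rewrites /etc/hosts by filtering out any stale control-plane.minikube.internal line and appending the current one. A sketch of the same edit; hosts.copy is a hypothetical local stand-in for /etc/hosts.)

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const hostsPath = "hosts.copy" // hypothetical stand-in for /etc/hosts
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				continue // drop stale entries, like the grep -v in the logged command
			}
			kept = append(kept, line)
		}
		kept = append(kept, "192.168.39.215\tcontrol-plane.minikube.internal")
		if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			panic(err)
		}
	}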
	I0812 10:21:37.045109   11941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 10:21:37.153394   11941 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 10:21:37.169392   11941 certs.go:68] Setting up /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541 for IP: 192.168.39.215
	I0812 10:21:37.169420   11941 certs.go:194] generating shared ca certs ...
	I0812 10:21:37.169441   11941 certs.go:226] acquiring lock for ca certs: {Name:mkab0b83a49d5695bc804bf960b468b7a47f73f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:21:37.169616   11941 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key
	I0812 10:21:37.336443   11941 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt ...
	I0812 10:21:37.336473   11941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt: {Name:mkbc3c098125ac3f2522015cca30de670fccd979 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:21:37.336667   11941 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key ...
	I0812 10:21:37.336681   11941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key: {Name:mkec40ed0841edc5c74ce2487e55b2bbbd544e77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:21:37.336779   11941 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key
	I0812 10:21:37.389583   11941 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt ...
	I0812 10:21:37.389612   11941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt: {Name:mk8633c1d66058e3480370fbf9bbb60bf08b3700 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:21:37.389787   11941 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key ...
	I0812 10:21:37.389801   11941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key: {Name:mk93371649518188ee90e0d9a0f5b731c74219a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:21:37.389895   11941 certs.go:256] generating profile certs ...
	I0812 10:21:37.389946   11941 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.key
	I0812 10:21:37.389960   11941 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt with IP's: []
	I0812 10:21:37.470209   11941 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt ...
	I0812 10:21:37.470245   11941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: {Name:mk4bcb5ba14ae75cb3839a7116df1154e0ebaace Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:21:37.470457   11941 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.key ...
	I0812 10:21:37.470474   11941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.key: {Name:mk169e7849142fd205bf40be584d56d7a263eb48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:21:37.470590   11941 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/apiserver.key.17dffe01
	I0812 10:21:37.470613   11941 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/apiserver.crt.17dffe01 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.215]
	I0812 10:21:37.601505   11941 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/apiserver.crt.17dffe01 ...
	I0812 10:21:37.601538   11941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/apiserver.crt.17dffe01: {Name:mk6549d9577dee251c862ca81280d8fa57a7529b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:21:37.601746   11941 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/apiserver.key.17dffe01 ...
	I0812 10:21:37.601764   11941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/apiserver.key.17dffe01: {Name:mkc66ae42aef29b5d7d41ff23f8c94d434115cc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:21:37.601886   11941 certs.go:381] copying /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/apiserver.crt.17dffe01 -> /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/apiserver.crt
	I0812 10:21:37.601990   11941 certs.go:385] copying /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/apiserver.key.17dffe01 -> /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/apiserver.key
	I0812 10:21:37.602053   11941 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/proxy-client.key
	I0812 10:21:37.602074   11941 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/proxy-client.crt with IP's: []
	I0812 10:21:37.791331   11941 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/proxy-client.crt ...
	I0812 10:21:37.791366   11941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/proxy-client.crt: {Name:mk3a20dfcd3b1fcbad22d815696fb332aaf2298a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:21:37.791559   11941 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/proxy-client.key ...
	I0812 10:21:37.791573   11941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/proxy-client.key: {Name:mka2f9a0fd92892fd228d39da8655da0480feac8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:21:37.791961   11941 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem (1679 bytes)
	I0812 10:21:37.792117   11941 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem (1082 bytes)
	I0812 10:21:37.792175   11941 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem (1123 bytes)
	I0812 10:21:37.792208   11941 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem (1679 bytes)
	I0812 10:21:37.793549   11941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 10:21:37.817267   11941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 10:21:37.842545   11941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 10:21:37.867098   11941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 10:21:37.889576   11941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0812 10:21:37.911732   11941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0812 10:21:37.934403   11941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 10:21:37.956784   11941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0812 10:21:37.979305   11941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 10:21:38.001271   11941 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 10:21:38.017008   11941 ssh_runner.go:195] Run: openssl version
	I0812 10:21:38.022442   11941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 10:21:38.032927   11941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:21:38.036983   11941 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:21:38.037045   11941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:21:38.042581   11941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 10:21:38.053007   11941 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 10:21:38.056773   11941 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0812 10:21:38.056832   11941 kubeadm.go:392] StartCluster: {Name:addons-883541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-883541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 10:21:38.056945   11941 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0812 10:21:38.057004   11941 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 10:21:38.091776   11941 cri.go:89] found id: ""
	I0812 10:21:38.091858   11941 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0812 10:21:38.101385   11941 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 10:21:38.110623   11941 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 10:21:38.121912   11941 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 10:21:38.121931   11941 kubeadm.go:157] found existing configuration files:
	
	I0812 10:21:38.121986   11941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 10:21:38.131112   11941 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 10:21:38.131173   11941 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 10:21:38.142139   11941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 10:21:38.152654   11941 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 10:21:38.152742   11941 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 10:21:38.164200   11941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 10:21:38.174761   11941 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 10:21:38.174821   11941 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 10:21:38.184469   11941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 10:21:38.194358   11941 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 10:21:38.194422   11941 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 10:21:38.203398   11941 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 10:21:38.264756   11941 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0812 10:21:38.264815   11941 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 10:21:38.393967   11941 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 10:21:38.394109   11941 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 10:21:38.394250   11941 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0812 10:21:38.597040   11941 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 10:21:38.757951   11941 out.go:204]   - Generating certificates and keys ...
	I0812 10:21:38.758100   11941 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 10:21:38.758202   11941 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 10:21:38.758308   11941 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0812 10:21:38.772574   11941 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0812 10:21:38.902830   11941 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0812 10:21:39.056775   11941 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0812 10:21:39.104179   11941 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0812 10:21:39.104348   11941 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-883541 localhost] and IPs [192.168.39.215 127.0.0.1 ::1]
	I0812 10:21:39.152735   11941 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0812 10:21:39.152926   11941 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-883541 localhost] and IPs [192.168.39.215 127.0.0.1 ::1]
	I0812 10:21:39.314940   11941 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0812 10:21:39.455351   11941 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0812 10:21:39.629750   11941 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0812 10:21:39.630006   11941 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 10:21:39.918591   11941 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 10:21:39.994303   11941 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0812 10:21:40.096562   11941 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 10:21:40.220435   11941 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 10:21:40.286635   11941 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 10:21:40.287365   11941 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 10:21:40.289762   11941 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 10:21:40.291449   11941 out.go:204]   - Booting up control plane ...
	I0812 10:21:40.291551   11941 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 10:21:40.291623   11941 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 10:21:40.291724   11941 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 10:21:40.306861   11941 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 10:21:40.307234   11941 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 10:21:40.307324   11941 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 10:21:40.429508   11941 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0812 10:21:40.429631   11941 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0812 10:21:40.931141   11941 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.943675ms
	I0812 10:21:40.931266   11941 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0812 10:21:46.430253   11941 kubeadm.go:310] [api-check] The API server is healthy after 5.502085251s
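	(Side note, not part of the captured log: the "[api-check]" wait is effectively polling the apiserver's health endpoint until it answers 200. A sketch against the address from this run; the /healthz path, poll interval and skipped TLS verification are illustrative assumptions, not kubeadm's exact client setup.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // example only
		}
		deadline := time.Now().Add(4 * time.Minute) // matches the "up to 4m0s" in the log
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.39.215:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("API server is healthy")
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for a healthy API server")
	}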
	I0812 10:21:46.452147   11941 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0812 10:21:46.471073   11941 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0812 10:21:46.518182   11941 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0812 10:21:46.518419   11941 kubeadm.go:310] [mark-control-plane] Marking the node addons-883541 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0812 10:21:46.531891   11941 kubeadm.go:310] [bootstrap-token] Using token: cgb65i.d34ppi7ahda2k1m8
	I0812 10:21:46.533581   11941 out.go:204]   - Configuring RBAC rules ...
	I0812 10:21:46.533736   11941 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0812 10:21:46.544636   11941 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0812 10:21:46.559355   11941 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0812 10:21:46.563726   11941 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0812 10:21:46.567640   11941 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0812 10:21:46.573235   11941 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0812 10:21:46.839886   11941 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0812 10:21:47.286928   11941 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0812 10:21:47.837132   11941 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0812 10:21:47.837154   11941 kubeadm.go:310] 
	I0812 10:21:47.837208   11941 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0812 10:21:47.837215   11941 kubeadm.go:310] 
	I0812 10:21:47.837329   11941 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0812 10:21:47.837353   11941 kubeadm.go:310] 
	I0812 10:21:47.837402   11941 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0812 10:21:47.837488   11941 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0812 10:21:47.837572   11941 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0812 10:21:47.837581   11941 kubeadm.go:310] 
	I0812 10:21:47.837643   11941 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0812 10:21:47.837651   11941 kubeadm.go:310] 
	I0812 10:21:47.837709   11941 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0812 10:21:47.837719   11941 kubeadm.go:310] 
	I0812 10:21:47.837794   11941 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0812 10:21:47.837900   11941 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0812 10:21:47.838001   11941 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0812 10:21:47.838019   11941 kubeadm.go:310] 
	I0812 10:21:47.838103   11941 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0812 10:21:47.838188   11941 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0812 10:21:47.838202   11941 kubeadm.go:310] 
	I0812 10:21:47.838300   11941 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cgb65i.d34ppi7ahda2k1m8 \
	I0812 10:21:47.838446   11941 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc \
	I0812 10:21:47.838487   11941 kubeadm.go:310] 	--control-plane 
	I0812 10:21:47.838497   11941 kubeadm.go:310] 
	I0812 10:21:47.838587   11941 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0812 10:21:47.838595   11941 kubeadm.go:310] 
	I0812 10:21:47.838714   11941 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cgb65i.d34ppi7ahda2k1m8 \
	I0812 10:21:47.838877   11941 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc 
	I0812 10:21:47.839018   11941 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 10:21:47.839031   11941 cni.go:84] Creating CNI manager for ""
	I0812 10:21:47.839037   11941 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 10:21:47.841021   11941 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0812 10:21:47.842386   11941 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0812 10:21:47.853384   11941 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0812 10:21:47.873701   11941 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 10:21:47.873763   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:47.873838   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-883541 minikube.k8s.io/updated_at=2024_08_12T10_21_47_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7 minikube.k8s.io/name=addons-883541 minikube.k8s.io/primary=true
	I0812 10:21:47.983993   11941 ops.go:34] apiserver oom_adj: -16
	I0812 10:21:47.984058   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:48.484945   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:48.984181   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:49.484311   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:49.984402   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:50.484980   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:50.984076   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:51.484794   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:51.985122   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:52.484335   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:52.985008   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:53.484934   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:53.985003   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:54.485031   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:54.984845   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:55.484834   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:55.984280   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:56.484135   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:56.984721   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:57.484825   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:57.985129   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:58.485000   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:58.984389   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:59.484333   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:21:59.984253   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:22:00.484116   11941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:22:00.578666   11941 kubeadm.go:1113] duration metric: took 12.704955754s to wait for elevateKubeSystemPrivileges
	I0812 10:22:00.578700   11941 kubeadm.go:394] duration metric: took 22.521872839s to StartCluster
	I0812 10:22:00.578723   11941 settings.go:142] acquiring lock: {Name:mk4060151bda1f131d6abfbbd19b6a8ab3b5e774 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:22:00.578841   11941 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 10:22:00.579253   11941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/kubeconfig: {Name:mke68f1d372d8e5baa69199b426efaec54860499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:22:00.579460   11941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0812 10:22:00.579490   11941 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 10:22:00.579562   11941 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0812 10:22:00.579688   11941 addons.go:69] Setting yakd=true in profile "addons-883541"
	I0812 10:22:00.579704   11941 addons.go:69] Setting inspektor-gadget=true in profile "addons-883541"
	I0812 10:22:00.579712   11941 addons.go:69] Setting storage-provisioner=true in profile "addons-883541"
	I0812 10:22:00.579729   11941 addons.go:234] Setting addon yakd=true in "addons-883541"
	I0812 10:22:00.579736   11941 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-883541"
	I0812 10:22:00.579746   11941 addons.go:69] Setting cloud-spanner=true in profile "addons-883541"
	I0812 10:22:00.579749   11941 config.go:182] Loaded profile config "addons-883541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:22:00.579758   11941 addons.go:234] Setting addon storage-provisioner=true in "addons-883541"
	I0812 10:22:00.579762   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:00.579765   11941 addons.go:234] Setting addon cloud-spanner=true in "addons-883541"
	I0812 10:22:00.579769   11941 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-883541"
	I0812 10:22:00.579791   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:00.579807   11941 addons.go:69] Setting helm-tiller=true in profile "addons-883541"
	I0812 10:22:00.579812   11941 addons.go:69] Setting default-storageclass=true in profile "addons-883541"
	I0812 10:22:00.579826   11941 addons.go:234] Setting addon helm-tiller=true in "addons-883541"
	I0812 10:22:00.579834   11941 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-883541"
	I0812 10:22:00.579849   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:00.579798   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:00.580178   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.580187   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.580208   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.580221   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.580238   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.580296   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.580328   11941 addons.go:69] Setting registry=true in profile "addons-883541"
	I0812 10:22:00.579720   11941 addons.go:69] Setting volcano=true in profile "addons-883541"
	I0812 10:22:00.580358   11941 addons.go:234] Setting addon registry=true in "addons-883541"
	I0812 10:22:00.579800   11941 addons.go:69] Setting gcp-auth=true in profile "addons-883541"
	I0812 10:22:00.579740   11941 addons.go:234] Setting addon inspektor-gadget=true in "addons-883541"
	I0812 10:22:00.580361   11941 addons.go:234] Setting addon volcano=true in "addons-883541"
	I0812 10:22:00.580372   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.580379   11941 mustload.go:65] Loading cluster: addons-883541
	I0812 10:22:00.579807   11941 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-883541"
	I0812 10:22:00.580385   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.580304   11941 addons.go:69] Setting ingress=true in profile "addons-883541"
	I0812 10:22:00.580423   11941 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-883541"
	I0812 10:22:00.580307   11941 addons.go:69] Setting ingress-dns=true in profile "addons-883541"
	I0812 10:22:00.580438   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.580316   11941 addons.go:69] Setting volumesnapshots=true in profile "addons-883541"
	I0812 10:22:00.580459   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.580464   11941 addons.go:234] Setting addon volumesnapshots=true in "addons-883541"
	I0812 10:22:00.580328   11941 addons.go:69] Setting metrics-server=true in profile "addons-883541"
	I0812 10:22:00.580484   11941 addons.go:234] Setting addon metrics-server=true in "addons-883541"
	I0812 10:22:00.580319   11941 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-883541"
	I0812 10:22:00.580441   11941 addons.go:234] Setting addon ingress-dns=true in "addons-883541"
	I0812 10:22:00.580501   11941 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-883541"
	I0812 10:22:00.580446   11941 addons.go:234] Setting addon ingress=true in "addons-883541"
	I0812 10:22:00.580557   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:00.580588   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:00.580593   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:00.580921   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:00.580926   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.580947   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.580947   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.580965   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.580976   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.580990   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.580996   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:00.581063   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:00.581174   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:00.581303   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.581328   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.581361   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:00.581376   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.581405   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.580922   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.581383   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:00.581465   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.581472   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.581490   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.581632   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.581674   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.581869   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.581903   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.582020   11941 config.go:182] Loaded profile config "addons-883541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:22:00.588973   11941 out.go:177] * Verifying Kubernetes components...
	I0812 10:22:00.593220   11941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 10:22:00.600836   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39529
	I0812 10:22:00.600850   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37871
	I0812 10:22:00.601177   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38049
	I0812 10:22:00.601325   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.601474   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.602016   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.602038   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.602160   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.602175   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.602376   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.602847   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33537
	I0812 10:22:00.602944   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.602973   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.602987   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.603067   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34629
	I0812 10:22:00.603262   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.603365   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.603885   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.603905   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.603961   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.604102   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.604118   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.604473   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.604487   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.604530   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.604571   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.615057   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36181
	I0812 10:22:00.615171   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32921
	I0812 10:22:00.621154   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.621362   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.621432   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.621441   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.621466   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.621762   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.621787   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.621811   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.621812   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.621903   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.621915   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.622072   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39339
	I0812 10:22:00.622217   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.622243   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.623221   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.623324   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.623398   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.629356   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.629371   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.629384   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.629391   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.629560   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.629574   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.630326   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.630405   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.630435   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.630895   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.630936   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.631428   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.631448   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.631492   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.631525   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.659083   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34991
	I0812 10:22:00.659802   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.660454   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.660477   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.660889   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.661097   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.661666   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34033
	I0812 10:22:00.662146   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.662701   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.662718   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.662839   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41195
	I0812 10:22:00.663147   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:22:00.663214   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.663234   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.663291   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37859
	I0812 10:22:00.663941   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.663975   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.664210   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41803
	I0812 10:22:00.664223   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.664283   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40005
	I0812 10:22:00.664744   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.664761   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.664807   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.664950   11941 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0812 10:22:00.665191   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.665260   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.665414   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.665435   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.665633   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.665777   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.665797   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.666112   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.666301   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.667061   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40789
	I0812 10:22:00.667230   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.667663   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.667682   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.667764   11941 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0812 10:22:00.669111   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.669189   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.669744   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.670725   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.670744   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.670814   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42869
	I0812 10:22:00.671009   11941 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-883541"
	I0812 10:22:00.671046   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:00.671110   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46383
	I0812 10:22:00.671408   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.671441   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.671442   11941 addons.go:234] Setting addon default-storageclass=true in "addons-883541"
	I0812 10:22:00.671474   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:00.671624   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.671733   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.671829   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.671850   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.672044   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.672058   11941 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0812 10:22:00.672064   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.672373   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.672859   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.672931   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.673245   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.675255   11941 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0812 10:22:00.675580   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:22:00.675893   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.676040   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.676513   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.676938   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:00.677246   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.677287   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.677568   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.677603   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.678524   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41839
	I0812 10:22:00.678697   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40671
	I0812 10:22:00.678754   11941 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0812 10:22:00.678771   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46163
	I0812 10:22:00.679081   11941 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0812 10:22:00.679197   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.679628   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.679691   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.679713   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.680191   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.680260   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.680277   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.680335   11941 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0812 10:22:00.680349   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0812 10:22:00.680386   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:22:00.680456   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.680548   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35073
	I0812 10:22:00.680750   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.681168   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.681199   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.681716   11941 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0812 10:22:00.681755   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.681775   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.681844   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.681915   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.681930   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.682439   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.682457   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.682861   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.683002   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.683015   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.683271   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.683837   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.683898   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:22:00.684134   11941 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0812 10:22:00.684212   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.684294   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:00.684311   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:00.686262   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:00.686261   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:22:00.686284   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:00.686294   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:00.686306   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:00.686313   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:00.686333   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:22:00.686623   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:00.686649   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:00.686656   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:00.686685   11941 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	W0812 10:22:00.686709   11941 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0812 10:22:00.687045   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.687274   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:22:00.687351   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.687548   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:22:00.687763   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:22:00.687849   11941 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0812 10:22:00.687880   11941 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0812 10:22:00.687899   11941 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0812 10:22:00.687918   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:22:00.687962   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:22:00.688080   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:22:00.688485   11941 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 10:22:00.688585   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40935
	I0812 10:22:00.689095   11941 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0812 10:22:00.689113   11941 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0812 10:22:00.689130   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:22:00.689333   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.689961   11941 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 10:22:00.689981   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 10:22:00.689998   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:22:00.690468   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.690486   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.692329   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.692814   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:22:00.692833   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.693033   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:22:00.693214   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:22:00.693421   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:22:00.693562   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:22:00.693985   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.695057   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:22:00.695079   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.695105   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.695489   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:22:00.695517   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.695520   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:22:00.695889   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:22:00.695928   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:22:00.696216   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:22:00.696262   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:22:00.696542   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:22:00.696822   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:22:00.697025   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:22:00.697240   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.697438   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.702140   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:22:00.704301   11941 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0812 10:22:00.706177   11941 out.go:177]   - Using image docker.io/registry:2.8.3
	I0812 10:22:00.706822   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45143
	I0812 10:22:00.707437   11941 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0812 10:22:00.707458   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0812 10:22:00.707479   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:22:00.707438   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.707989   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.708006   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.708383   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.708580   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.711015   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:22:00.711476   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.711902   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:22:00.711932   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.712097   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:22:00.712319   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:22:00.712576   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:22:00.712649   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34655
	I0812 10:22:00.712988   11941 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0812 10:22:00.713148   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46457
	I0812 10:22:00.713172   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:22:00.713567   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.714067   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.714093   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.714433   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.714488   11941 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0812 10:22:00.714508   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0812 10:22:00.714524   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:22:00.714572   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.716702   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:22:00.717148   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.717193   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41753
	I0812 10:22:00.717625   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.717687   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.717785   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.718171   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.718298   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.718313   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.718751   11941 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0812 10:22:00.718972   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.719003   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.719177   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:22:00.720553   11941 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0812 10:22:00.720571   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0812 10:22:00.720589   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:22:00.720666   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:22:00.720714   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:22:00.720736   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.720929   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.721035   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:22:00.721184   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:22:00.721353   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:22:00.722425   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36123
	I0812 10:22:00.723117   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.723849   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.723867   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.724099   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:22:00.724154   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.724699   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:22:00.724720   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.725040   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.725065   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:22:00.725105   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45401
	I0812 10:22:00.725205   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39877
	I0812 10:22:00.725425   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.725584   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:22:00.725597   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.725801   11941 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0812 10:22:00.725881   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:22:00.726056   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:22:00.726814   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.727006   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:22:00.727036   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.727050   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.727303   11941 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0812 10:22:00.727320   11941 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0812 10:22:00.727345   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:22:00.727599   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.728199   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.728224   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.728406   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.728434   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.728715   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41453
	I0812 10:22:00.728844   11941 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0812 10:22:00.729216   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.729265   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.729446   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.729792   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45021
	I0812 10:22:00.730060   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.730078   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.730136   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.730598   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.730618   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.730629   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.730989   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.731122   11941 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0812 10:22:00.731192   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.731427   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:00.731474   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:00.733200   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.733680   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:22:00.733708   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.733775   11941 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0812 10:22:00.733880   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:22:00.733949   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:22:00.734150   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:22:00.734191   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:22:00.734371   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:22:00.734719   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:22:00.735351   11941 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0812 10:22:00.735373   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0812 10:22:00.735400   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:22:00.736131   11941 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0812 10:22:00.736209   11941 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0812 10:22:00.737592   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44951
	I0812 10:22:00.737782   11941 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0812 10:22:00.737794   11941 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0812 10:22:00.737812   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:22:00.737966   11941 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0812 10:22:00.737974   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0812 10:22:00.737988   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:22:00.741257   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.741340   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.742280   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.742284   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:22:00.742315   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:22:00.742331   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.742354   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.742390   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.742404   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.742513   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:22:00.742568   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:22:00.742584   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.742704   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:22:00.742720   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.742747   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:22:00.742790   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:22:00.742950   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:22:00.742954   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:22:00.743007   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:22:00.743084   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:22:00.743198   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:22:00.743275   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:22:00.743417   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:22:00.743539   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:22:00.744173   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.744360   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.745878   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	W0812 10:22:00.747149   11941 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56324->192.168.39.215:22: read: connection reset by peer
	I0812 10:22:00.747175   11941 retry.go:31] will retry after 353.172764ms: ssh: handshake failed: read tcp 192.168.39.1:56324->192.168.39.215:22: read: connection reset by peer
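The reset above is transient: at this timestamp the addon installers are opening several SSH/SCP sessions to the node at once (see the surrounding "new ssh client" lines), one handshake gets reset, and retry.go backs off ~353ms before reconnecting. For manual debugging, the same connection can be opened with the key, user and address that sshutil.go prints above; a sketch (roughly what minikube -p addons-883541 ssh does under the KVM driver):

    # Hypothetical manual check, reusing the machine key path printed in the log
    ssh -o StrictHostKeyChecking=no \
        -i /home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa \
        docker@192.168.39.215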
	I0812 10:22:00.747812   11941 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0812 10:22:00.749091   11941 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0812 10:22:00.749111   11941 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0812 10:22:00.749133   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:22:00.752125   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.752300   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42701
	I0812 10:22:00.752492   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:22:00.752507   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.752817   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:22:00.753046   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:22:00.753146   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.753197   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:22:00.753334   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:22:00.753657   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.753669   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.753893   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40933
	I0812 10:22:00.754041   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.754187   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.754223   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:00.754616   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:00.754627   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:00.755458   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:00.755637   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:00.755675   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:22:00.755820   11941 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 10:22:00.755829   11941 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 10:22:00.755838   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:22:00.757456   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:22:00.759121   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.759257   11941 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0812 10:22:00.759507   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:22:00.759527   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.759716   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:22:00.759863   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:22:00.759971   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:22:00.760123   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:22:00.762040   11941 out.go:177]   - Using image docker.io/busybox:stable
	I0812 10:22:00.763341   11941 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0812 10:22:00.763355   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0812 10:22:00.763371   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:22:00.766185   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.766508   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:22:00.766522   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:00.766673   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:22:00.766799   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:22:00.766888   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:22:00.766975   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:22:01.026133   11941 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 10:22:01.026235   11941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0812 10:22:01.050165   11941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0812 10:22:01.061852   11941 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0812 10:22:01.061880   11941 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0812 10:22:01.156461   11941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 10:22:01.183821   11941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 10:22:01.195902   11941 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0812 10:22:01.195921   11941 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0812 10:22:01.197772   11941 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0812 10:22:01.197787   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0812 10:22:01.205568   11941 node_ready.go:35] waiting up to 6m0s for node "addons-883541" to be "Ready" ...
	I0812 10:22:01.210492   11941 node_ready.go:49] node "addons-883541" has status "Ready":"True"
	I0812 10:22:01.210514   11941 node_ready.go:38] duration metric: took 4.9219ms for node "addons-883541" to be "Ready" ...
	I0812 10:22:01.210523   11941 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 10:22:01.222513   11941 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jn9jq" in "kube-system" namespace to be "Ready" ...
	I0812 10:22:01.243720   11941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0812 10:22:01.260523   11941 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0812 10:22:01.260541   11941 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0812 10:22:01.346995   11941 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0812 10:22:01.347011   11941 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0812 10:22:01.350941   11941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0812 10:22:01.364618   11941 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0812 10:22:01.364640   11941 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0812 10:22:01.387333   11941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0812 10:22:01.399066   11941 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0812 10:22:01.399091   11941 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0812 10:22:01.409621   11941 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0812 10:22:01.409650   11941 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0812 10:22:01.420126   11941 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0812 10:22:01.420146   11941 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0812 10:22:01.444122   11941 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0812 10:22:01.444144   11941 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0812 10:22:01.501996   11941 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0812 10:22:01.502018   11941 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0812 10:22:01.561076   11941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0812 10:22:01.565443   11941 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 10:22:01.565462   11941 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0812 10:22:01.606773   11941 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0812 10:22:01.606791   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0812 10:22:01.622885   11941 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0812 10:22:01.622912   11941 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0812 10:22:01.640422   11941 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0812 10:22:01.640446   11941 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0812 10:22:01.666855   11941 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0812 10:22:01.666879   11941 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0812 10:22:01.745577   11941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 10:22:01.748032   11941 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0812 10:22:01.748056   11941 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0812 10:22:01.770990   11941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0812 10:22:01.831033   11941 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0812 10:22:01.831055   11941 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0812 10:22:01.831548   11941 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0812 10:22:01.831565   11941 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0812 10:22:01.865924   11941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0812 10:22:01.875437   11941 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0812 10:22:01.875460   11941 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0812 10:22:01.918434   11941 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0812 10:22:01.918454   11941 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0812 10:22:01.971727   11941 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0812 10:22:01.971755   11941 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0812 10:22:02.035261   11941 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0812 10:22:02.035291   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0812 10:22:02.080530   11941 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0812 10:22:02.080558   11941 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0812 10:22:02.223415   11941 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0812 10:22:02.223443   11941 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0812 10:22:02.304629   11941 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0812 10:22:02.304648   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0812 10:22:02.312743   11941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0812 10:22:02.462427   11941 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0812 10:22:02.462451   11941 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0812 10:22:02.548822   11941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0812 10:22:02.553125   11941 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0812 10:22:02.553148   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0812 10:22:02.772018   11941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0812 10:22:02.828012   11941 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0812 10:22:02.828046   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0812 10:22:03.063831   11941 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0812 10:22:03.063859   11941 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0812 10:22:03.117762   11941 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.091492661s)
	I0812 10:22:03.117800   11941 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
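The pipeline that just completed rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway of the KVM network (192.168.39.1 in this run). A quick way to confirm the injected stanza, assuming the addons-883541 context from this run is still reachable:

    # Print the Corefile and show the hosts block added by the sed pipeline above
    kubectl --context addons-883541 -n kube-system get configmap coredns \
        -o jsonpath='{.data.Corefile}' | grep -B1 -A2 host.minikube.internal
    # Expected (per the injected block):
    #     hosts {
    #        192.168.39.1 host.minikube.internal
    #        fallthrough
    #     }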
	I0812 10:22:03.256782   11941 pod_ready.go:102] pod "coredns-7db6d8ff4d-jn9jq" in "kube-system" namespace has status "Ready":"False"
	I0812 10:22:03.467068   11941 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0812 10:22:03.467090   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0812 10:22:03.640785   11941 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-883541" context rescaled to 1 replicas
	I0812 10:22:03.794407   11941 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0812 10:22:03.794427   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0812 10:22:04.056170   11941 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0812 10:22:04.056190   11941 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0812 10:22:04.266658   11941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.216455276s)
	I0812 10:22:04.266695   11941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.11020412s)
	I0812 10:22:04.266714   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:04.266729   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:04.266716   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:04.266797   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:04.267068   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:04.267088   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:04.267097   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:04.267105   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:04.267126   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:04.267163   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:04.267180   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:04.267195   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:04.267206   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:04.267384   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:04.267422   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:04.268779   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:04.268798   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:04.268818   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:04.313767   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:04.313795   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:04.314036   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:04.314050   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:04.382426   11941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0812 10:22:05.285276   11941 pod_ready.go:102] pod "coredns-7db6d8ff4d-jn9jq" in "kube-system" namespace has status "Ready":"False"
	I0812 10:22:05.960658   11941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.776802127s)
	I0812 10:22:05.960707   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:05.960718   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:05.960730   11941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.609763628s)
	I0812 10:22:05.960754   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:05.960659   11941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.716901839s)
	I0812 10:22:05.960766   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:05.960794   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:05.960811   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:05.961095   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:05.961113   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:05.961124   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:05.961133   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:05.962928   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:05.962932   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:05.962959   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:05.962965   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:05.962968   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:05.962963   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:05.962976   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:05.962984   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:05.962934   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:05.963038   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:05.963047   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:05.963053   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:05.962937   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:05.963290   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:05.963307   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:05.963336   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:05.963345   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:05.963356   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:05.963368   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:05.983517   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:05.983540   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:05.983894   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:05.983914   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:07.484733   11941 pod_ready.go:92] pod "coredns-7db6d8ff4d-jn9jq" in "kube-system" namespace has status "Ready":"True"
	I0812 10:22:07.484755   11941 pod_ready.go:81] duration metric: took 6.262207003s for pod "coredns-7db6d8ff4d-jn9jq" in "kube-system" namespace to be "Ready" ...
	I0812 10:22:07.484789   11941 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vgg6r" in "kube-system" namespace to be "Ready" ...
	I0812 10:22:07.603775   11941 pod_ready.go:92] pod "coredns-7db6d8ff4d-vgg6r" in "kube-system" namespace has status "Ready":"True"
	I0812 10:22:07.603809   11941 pod_ready.go:81] duration metric: took 119.011289ms for pod "coredns-7db6d8ff4d-vgg6r" in "kube-system" namespace to be "Ready" ...
	I0812 10:22:07.603823   11941 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-883541" in "kube-system" namespace to be "Ready" ...
	I0812 10:22:07.710519   11941 pod_ready.go:92] pod "etcd-addons-883541" in "kube-system" namespace has status "Ready":"True"
	I0812 10:22:07.710546   11941 pod_ready.go:81] duration metric: took 106.712142ms for pod "etcd-addons-883541" in "kube-system" namespace to be "Ready" ...
	I0812 10:22:07.710558   11941 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-883541" in "kube-system" namespace to be "Ready" ...
	I0812 10:22:07.742447   11941 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0812 10:22:07.742494   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:22:07.745738   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:07.746200   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:22:07.746232   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:07.746413   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:22:07.746646   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:22:07.746816   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:22:07.746980   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:22:07.755898   11941 pod_ready.go:92] pod "kube-apiserver-addons-883541" in "kube-system" namespace has status "Ready":"True"
	I0812 10:22:07.755920   11941 pod_ready.go:81] duration metric: took 45.354609ms for pod "kube-apiserver-addons-883541" in "kube-system" namespace to be "Ready" ...
	I0812 10:22:07.755929   11941 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-883541" in "kube-system" namespace to be "Ready" ...
	I0812 10:22:07.785589   11941 pod_ready.go:92] pod "kube-controller-manager-addons-883541" in "kube-system" namespace has status "Ready":"True"
	I0812 10:22:07.785622   11941 pod_ready.go:81] duration metric: took 29.685304ms for pod "kube-controller-manager-addons-883541" in "kube-system" namespace to be "Ready" ...
	I0812 10:22:07.785637   11941 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dswsl" in "kube-system" namespace to be "Ready" ...
	I0812 10:22:07.900844   11941 pod_ready.go:92] pod "kube-proxy-dswsl" in "kube-system" namespace has status "Ready":"True"
	I0812 10:22:07.900900   11941 pod_ready.go:81] duration metric: took 115.255004ms for pod "kube-proxy-dswsl" in "kube-system" namespace to be "Ready" ...
	I0812 10:22:07.900914   11941 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-883541" in "kube-system" namespace to be "Ready" ...
	I0812 10:22:07.937148   11941 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0812 10:22:08.041377   11941 addons.go:234] Setting addon gcp-auth=true in "addons-883541"
	I0812 10:22:08.041423   11941 host.go:66] Checking if "addons-883541" exists ...
	I0812 10:22:08.041770   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:08.041799   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:08.043553   11941 pod_ready.go:92] pod "kube-scheduler-addons-883541" in "kube-system" namespace has status "Ready":"True"
	I0812 10:22:08.043568   11941 pod_ready.go:81] duration metric: took 142.64749ms for pod "kube-scheduler-addons-883541" in "kube-system" namespace to be "Ready" ...
	I0812 10:22:08.043576   11941 pod_ready.go:38] duration metric: took 6.833038323s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
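The readiness gate that just finished (node Ready, then each system-critical pod Ready) can be spot-checked by hand with equivalent kubectl waits; a rough sketch, reusing the context name, the label selectors listed in the log, and the 6m0s budget:

    # Re-check the same conditions the log waited on (illustrative, not what minikube itself runs)
    kubectl --context addons-883541 wait --for=condition=Ready node/addons-883541 --timeout=6m
    kubectl --context addons-883541 -n kube-system wait --for=condition=Ready \
        pod -l k8s-app=kube-dns --timeout=6m
    kubectl --context addons-883541 -n kube-system wait --for=condition=Ready \
        pod -l component=kube-apiserver --timeout=6m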
	I0812 10:22:08.043596   11941 api_server.go:52] waiting for apiserver process to appear ...
	I0812 10:22:08.043642   11941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:22:08.057062   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42731
	I0812 10:22:08.057545   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:08.058073   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:08.058095   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:08.058432   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:08.059028   11941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:22:08.059064   11941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:22:08.075447   11941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45861
	I0812 10:22:08.075954   11941 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:22:08.076404   11941 main.go:141] libmachine: Using API Version  1
	I0812 10:22:08.076426   11941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:22:08.076734   11941 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:22:08.076955   11941 main.go:141] libmachine: (addons-883541) Calling .GetState
	I0812 10:22:08.078695   11941 main.go:141] libmachine: (addons-883541) Calling .DriverName
	I0812 10:22:08.078963   11941 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0812 10:22:08.078990   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHHostname
	I0812 10:22:08.081673   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:08.082054   11941 main.go:141] libmachine: (addons-883541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:c3:eb", ip: ""} in network mk-addons-883541: {Iface:virbr1 ExpiryTime:2024-08-12 11:21:22 +0000 UTC Type:0 Mac:52:54:00:63:c3:eb Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-883541 Clientid:01:52:54:00:63:c3:eb}
	I0812 10:22:08.082083   11941 main.go:141] libmachine: (addons-883541) DBG | domain addons-883541 has defined IP address 192.168.39.215 and MAC address 52:54:00:63:c3:eb in network mk-addons-883541
	I0812 10:22:08.082240   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHPort
	I0812 10:22:08.082425   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHKeyPath
	I0812 10:22:08.082552   11941 main.go:141] libmachine: (addons-883541) Calling .GetSSHUsername
	I0812 10:22:08.082694   11941 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/addons-883541/id_rsa Username:docker}
	I0812 10:22:09.334248   11941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.946883426s)
	I0812 10:22:09.334261   11941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.773154894s)
	I0812 10:22:09.334294   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:09.334309   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:09.334371   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:09.334383   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:09.334368   11941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.588748326s)
	I0812 10:22:09.334412   11941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.563388221s)
	I0812 10:22:09.334445   11941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.468492432s)
	I0812 10:22:09.334460   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:09.334467   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:09.334475   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:09.334481   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:09.334490   11941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.02172108s)
	I0812 10:22:09.334516   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:09.334529   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:09.334535   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:09.334544   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:09.334678   11941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.785820296s)
	W0812 10:22:09.334710   11941 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0812 10:22:09.334737   11941 retry.go:31] will retry after 355.3481ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
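This failure is the usual CRD ordering race: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, and the new kind is not yet registered when the class is submitted, hence "no matches for kind". minikube simply schedules a retry after ~355ms (and, as the later log line shows, re-applies with --force). When reproducing the sequence manually, the cleaner fix is to wait for the CRDs to report Established before applying the class; a sketch, with an arbitrary 60s timeout:

    # Wait for the snapshot CRDs to be Established, then re-apply the snapshot class
    kubectl --context addons-883541 wait --for=condition=established --timeout=60s \
        crd/volumesnapshotclasses.snapshot.storage.k8s.io \
        crd/volumesnapshotcontents.snapshot.storage.k8s.io \
        crd/volumesnapshots.snapshot.storage.k8s.io
    # csi-hostpath-snapshotclass.yaml lives on the node at the path shown in the log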
	I0812 10:22:09.334820   11941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.562764327s)
	I0812 10:22:09.334826   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:09.334842   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:09.334842   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:09.334852   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:09.334861   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:09.334866   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:09.334869   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:09.334874   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:09.334882   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:09.334887   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:09.334891   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:09.334896   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:09.334899   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:09.334905   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:09.334913   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:09.334914   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:09.334918   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:09.334927   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:09.334936   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:09.334944   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:09.334937   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:09.334971   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:09.335375   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:09.335401   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:09.335408   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:09.335417   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:09.335424   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:09.335470   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:09.335488   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:09.335494   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:09.335503   11941 addons.go:475] Verifying addon ingress=true in "addons-883541"
	I0812 10:22:09.336042   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:09.336052   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:09.336063   11941 addons.go:475] Verifying addon registry=true in "addons-883541"
	I0812 10:22:09.336162   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:09.336182   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:09.336187   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:09.337143   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:09.337158   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:09.337406   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:09.337416   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:09.337424   11941 addons.go:475] Verifying addon metrics-server=true in "addons-883541"
	I0812 10:22:09.337724   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:09.337735   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:09.337744   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:09.337752   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:09.337857   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:09.337886   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:09.337892   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:09.337910   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:09.337917   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:09.337983   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:09.338014   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:09.338020   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:09.338547   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:09.338600   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:09.338621   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:09.338685   11941 out.go:177] * Verifying ingress addon...
	I0812 10:22:09.339658   11941 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-883541 service yakd-dashboard -n yakd-dashboard
	
	I0812 10:22:09.339692   11941 out.go:177] * Verifying registry addon...
	I0812 10:22:09.341444   11941 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0812 10:22:09.341845   11941 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0812 10:22:09.363895   11941 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0812 10:22:09.363927   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:09.369309   11941 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0812 10:22:09.369340   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:09.691282   11941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0812 10:22:09.848385   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:09.851757   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:10.398058   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:10.402272   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:10.862201   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:10.866803   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:11.050974   11941 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.007308719s)
	I0812 10:22:11.051011   11941 api_server.go:72] duration metric: took 10.471491866s to wait for apiserver process to appear ...
	I0812 10:22:11.051018   11941 api_server.go:88] waiting for apiserver healthz status ...
	I0812 10:22:11.051035   11941 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8443/healthz ...
	I0812 10:22:11.051034   11941 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.972053482s)
	I0812 10:22:11.050977   11941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.668501911s)
	I0812 10:22:11.051135   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:11.051159   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:11.051512   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:11.051531   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:11.051542   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:11.051555   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:11.051792   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:11.051809   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:11.051820   11941 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-883541"
	I0812 10:22:11.052641   11941 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0812 10:22:11.053711   11941 out.go:177] * Verifying csi-hostpath-driver addon...
	I0812 10:22:11.055075   11941 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0812 10:22:11.055766   11941 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0812 10:22:11.056443   11941 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0812 10:22:11.056464   11941 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0812 10:22:11.060247   11941 api_server.go:279] https://192.168.39.215:8443/healthz returned 200:
	ok
	I0812 10:22:11.061638   11941 api_server.go:141] control plane version: v1.30.3
	I0812 10:22:11.061659   11941 api_server.go:131] duration metric: took 10.636343ms to wait for apiserver health ...
	I0812 10:22:11.061667   11941 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 10:22:11.078712   11941 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0812 10:22:11.078736   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:11.112148   11941 system_pods.go:59] 19 kube-system pods found
	I0812 10:22:11.112180   11941 system_pods.go:61] "coredns-7db6d8ff4d-jn9jq" [951e2ef7-fcae-4716-baa6-a6165ab20cc7] Running
	I0812 10:22:11.112184   11941 system_pods.go:61] "coredns-7db6d8ff4d-vgg6r" [d2d3a2bf-c74b-4317-96a2-2a4917a45e7e] Running
	I0812 10:22:11.112191   11941 system_pods.go:61] "csi-hostpath-attacher-0" [dc2cf19a-dc76-4980-a455-ca84123661e0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0812 10:22:11.112195   11941 system_pods.go:61] "csi-hostpath-resizer-0" [78cfc69c-952e-4d16-b8db-047b7ee663ed] Pending
	I0812 10:22:11.112203   11941 system_pods.go:61] "csi-hostpathplugin-pbz4r" [af18ae79-821d-4b0c-9bac-9e1a015ba81c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0812 10:22:11.112207   11941 system_pods.go:61] "etcd-addons-883541" [7c24dcbb-833e-4d32-ad2d-8fae7badf7ae] Running
	I0812 10:22:11.112212   11941 system_pods.go:61] "kube-apiserver-addons-883541" [6e96bb86-808a-4824-9902-9e19d71d23ef] Running
	I0812 10:22:11.112216   11941 system_pods.go:61] "kube-controller-manager-addons-883541" [52bf2c7b-b7f4-4be1-8c6b-6482400096bb] Running
	I0812 10:22:11.112220   11941 system_pods.go:61] "kube-ingress-dns-minikube" [06067b49-111f-4363-8bb3-2007070757ee] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0812 10:22:11.112224   11941 system_pods.go:61] "kube-proxy-dswsl" [73a29712-f2b7-4371-a3f3-9920d0a4bde5] Running
	I0812 10:22:11.112227   11941 system_pods.go:61] "kube-scheduler-addons-883541" [c4f4ad69-850f-4301-a8dd-21633ca63ca4] Running
	I0812 10:22:11.112231   11941 system_pods.go:61] "metrics-server-c59844bb4-j7r9p" [64cd8192-55f2-4d23-8337-068eddc6126c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 10:22:11.112238   11941 system_pods.go:61] "nvidia-device-plugin-daemonset-r9hqx" [12e175a3-9d78-4c03-af1e-0b8ed635e01b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0812 10:22:11.112244   11941 system_pods.go:61] "registry-698f998955-xww5t" [bd991983-9d87-471c-b2ac-7cae341f9d1f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0812 10:22:11.112249   11941 system_pods.go:61] "registry-proxy-8xczh" [7f708cb9-ae7f-4021-be11-218df27928d4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0812 10:22:11.112254   11941 system_pods.go:61] "snapshot-controller-745499f584-4gwxm" [ee6f839c-444d-4c56-b476-f5a81329f5fc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0812 10:22:11.112260   11941 system_pods.go:61] "snapshot-controller-745499f584-mmlfj" [cacd9827-23a1-4a79-8983-9fb972a22964] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0812 10:22:11.112264   11941 system_pods.go:61] "storage-provisioner" [54a9610b-ab55-47f3-943c-2c6f54430fdc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0812 10:22:11.112270   11941 system_pods.go:61] "tiller-deploy-6677d64bcd-45ft9" [87ea7eab-fd15-420a-ad1a-20231ebf7ba3] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0812 10:22:11.112278   11941 system_pods.go:74] duration metric: took 50.607016ms to wait for pod list to return data ...
	I0812 10:22:11.112286   11941 default_sa.go:34] waiting for default service account to be created ...
	I0812 10:22:11.121157   11941 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0812 10:22:11.121182   11941 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0812 10:22:11.125160   11941 default_sa.go:45] found service account: "default"
	I0812 10:22:11.125183   11941 default_sa.go:55] duration metric: took 12.89161ms for default service account to be created ...
	I0812 10:22:11.125195   11941 system_pods.go:116] waiting for k8s-apps to be running ...
	I0812 10:22:11.146584   11941 system_pods.go:86] 19 kube-system pods found
	I0812 10:22:11.146638   11941 system_pods.go:89] "coredns-7db6d8ff4d-jn9jq" [951e2ef7-fcae-4716-baa6-a6165ab20cc7] Running
	I0812 10:22:11.146647   11941 system_pods.go:89] "coredns-7db6d8ff4d-vgg6r" [d2d3a2bf-c74b-4317-96a2-2a4917a45e7e] Running
	I0812 10:22:11.146658   11941 system_pods.go:89] "csi-hostpath-attacher-0" [dc2cf19a-dc76-4980-a455-ca84123661e0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0812 10:22:11.146666   11941 system_pods.go:89] "csi-hostpath-resizer-0" [78cfc69c-952e-4d16-b8db-047b7ee663ed] Pending
	I0812 10:22:11.146680   11941 system_pods.go:89] "csi-hostpathplugin-pbz4r" [af18ae79-821d-4b0c-9bac-9e1a015ba81c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0812 10:22:11.146691   11941 system_pods.go:89] "etcd-addons-883541" [7c24dcbb-833e-4d32-ad2d-8fae7badf7ae] Running
	I0812 10:22:11.146698   11941 system_pods.go:89] "kube-apiserver-addons-883541" [6e96bb86-808a-4824-9902-9e19d71d23ef] Running
	I0812 10:22:11.146705   11941 system_pods.go:89] "kube-controller-manager-addons-883541" [52bf2c7b-b7f4-4be1-8c6b-6482400096bb] Running
	I0812 10:22:11.146716   11941 system_pods.go:89] "kube-ingress-dns-minikube" [06067b49-111f-4363-8bb3-2007070757ee] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0812 10:22:11.146728   11941 system_pods.go:89] "kube-proxy-dswsl" [73a29712-f2b7-4371-a3f3-9920d0a4bde5] Running
	I0812 10:22:11.146738   11941 system_pods.go:89] "kube-scheduler-addons-883541" [c4f4ad69-850f-4301-a8dd-21633ca63ca4] Running
	I0812 10:22:11.146751   11941 system_pods.go:89] "metrics-server-c59844bb4-j7r9p" [64cd8192-55f2-4d23-8337-068eddc6126c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 10:22:11.146763   11941 system_pods.go:89] "nvidia-device-plugin-daemonset-r9hqx" [12e175a3-9d78-4c03-af1e-0b8ed635e01b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0812 10:22:11.146777   11941 system_pods.go:89] "registry-698f998955-xww5t" [bd991983-9d87-471c-b2ac-7cae341f9d1f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0812 10:22:11.146789   11941 system_pods.go:89] "registry-proxy-8xczh" [7f708cb9-ae7f-4021-be11-218df27928d4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0812 10:22:11.146801   11941 system_pods.go:89] "snapshot-controller-745499f584-4gwxm" [ee6f839c-444d-4c56-b476-f5a81329f5fc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0812 10:22:11.146814   11941 system_pods.go:89] "snapshot-controller-745499f584-mmlfj" [cacd9827-23a1-4a79-8983-9fb972a22964] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0812 10:22:11.146823   11941 system_pods.go:89] "storage-provisioner" [54a9610b-ab55-47f3-943c-2c6f54430fdc] Running
	I0812 10:22:11.146834   11941 system_pods.go:89] "tiller-deploy-6677d64bcd-45ft9" [87ea7eab-fd15-420a-ad1a-20231ebf7ba3] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0812 10:22:11.146846   11941 system_pods.go:126] duration metric: took 21.645227ms to wait for k8s-apps to be running ...
	I0812 10:22:11.146860   11941 system_svc.go:44] waiting for kubelet service to be running ....
	I0812 10:22:11.146916   11941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:22:11.172515   11941 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0812 10:22:11.172546   11941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0812 10:22:11.234994   11941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0812 10:22:11.345459   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:11.348627   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:11.561914   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:11.614520   11941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.923179123s)
	I0812 10:22:11.614583   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:11.614601   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:11.614581   11941 system_svc.go:56] duration metric: took 467.714276ms WaitForService to wait for kubelet
	I0812 10:22:11.614676   11941 kubeadm.go:582] duration metric: took 11.035148966s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 10:22:11.614711   11941 node_conditions.go:102] verifying NodePressure condition ...
	I0812 10:22:11.614983   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:11.615030   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:11.615039   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:11.615051   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:11.615058   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:11.615278   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:11.615305   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:11.615329   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:11.617990   11941 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 10:22:11.618020   11941 node_conditions.go:123] node cpu capacity is 2
	I0812 10:22:11.618034   11941 node_conditions.go:105] duration metric: took 3.316232ms to run NodePressure ...
	I0812 10:22:11.618046   11941 start.go:241] waiting for startup goroutines ...
	I0812 10:22:11.847439   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:11.856525   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:12.065206   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:12.360054   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:12.360299   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:12.551544   11941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.316515504s)
	I0812 10:22:12.551592   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:12.551608   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:12.551988   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:12.552008   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:12.552014   11941 main.go:141] libmachine: (addons-883541) DBG | Closing plugin on server side
	I0812 10:22:12.552024   11941 main.go:141] libmachine: Making call to close driver server
	I0812 10:22:12.552033   11941 main.go:141] libmachine: (addons-883541) Calling .Close
	I0812 10:22:12.552268   11941 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:22:12.552281   11941 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:22:12.554255   11941 addons.go:475] Verifying addon gcp-auth=true in "addons-883541"
	I0812 10:22:12.556104   11941 out.go:177] * Verifying gcp-auth addon...
	I0812 10:22:12.558417   11941 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0812 10:22:12.615188   11941 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0812 10:22:12.615217   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:12.615426   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:12.857143   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:12.863141   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:13.061270   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:13.068029   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:13.347799   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:13.349134   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:13.563394   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:13.566137   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:13.849177   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:13.850869   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:14.062258   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:14.062685   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:14.347190   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:14.350208   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:14.561370   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:14.562720   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:14.846339   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:14.847264   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:15.069939   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:15.071281   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:15.348272   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:15.348558   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:15.563161   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:15.565234   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:15.847222   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:15.850590   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:16.060883   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:16.062560   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:16.345734   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:16.348660   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:16.574487   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:16.582612   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:16.845668   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:16.847153   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:17.177706   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:17.179919   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:17.347431   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:17.349577   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:17.562122   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:17.564245   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:17.847846   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:17.849682   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:18.062715   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:18.063430   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:18.347456   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:18.348767   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:18.561715   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:18.563390   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:18.847032   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:18.847094   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:19.061231   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:19.062246   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:19.346918   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:19.347082   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:19.561658   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:19.561942   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:19.845930   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:19.846391   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:20.061581   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:20.063431   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:20.346070   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:20.346721   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:20.561733   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:20.563548   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:20.846747   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:20.847101   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:21.061871   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:21.062776   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:21.347783   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:21.347783   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:21.561803   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:21.562824   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:21.945454   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:21.946537   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:22.061040   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:22.062644   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:22.345191   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:22.348474   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:22.562204   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:22.562798   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:22.847494   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:22.848036   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:23.063064   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:23.063493   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:23.347386   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:23.347461   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:23.562241   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:23.562818   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:23.847031   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:23.847796   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:24.076249   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:24.076739   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:24.347179   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:24.348238   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:24.564497   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:24.564652   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:24.848702   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:24.851510   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:25.062384   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:25.063537   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:25.346478   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:25.346673   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:25.561848   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:25.563059   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:25.848000   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:25.848530   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:26.061874   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:26.063621   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:26.346055   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:26.347790   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:26.565960   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:26.566389   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:26.846365   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:26.847333   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:27.061039   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:27.061499   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:27.346496   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:27.346698   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:27.561230   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:27.562843   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:27.847820   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:27.847880   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:28.062161   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:28.063434   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:28.346681   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:28.348134   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:28.561251   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:28.562186   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:28.847819   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:28.847950   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:29.061356   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:29.062972   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:29.347284   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:29.348381   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:29.562348   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:29.564139   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:29.845303   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:29.848217   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:30.061130   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:30.063011   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:30.351747   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:30.352539   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:30.562826   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:30.564120   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:30.864169   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:30.865263   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:31.062658   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:31.063455   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:31.349897   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:31.351523   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:31.561243   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:31.563182   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:31.845792   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:31.846926   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:32.063169   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:32.064578   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:32.345701   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:32.347668   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:32.563013   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:32.566190   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:32.846662   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:32.848106   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:33.061886   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:33.062636   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:33.348028   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:33.348329   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:33.561184   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:33.564331   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:33.847207   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:33.847607   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:34.061618   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:34.061992   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:34.347244   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:34.347336   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:34.561205   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:34.562276   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:34.846728   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:34.847917   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:35.062479   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:35.064609   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:35.348300   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:35.349782   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:35.561454   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:35.562896   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:35.848296   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:35.848448   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:36.061469   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:36.063313   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:36.346554   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:36.347384   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:36.561418   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:36.562201   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:36.847362   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:36.848451   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:37.061299   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:37.062196   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:37.348112   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:37.348419   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:37.561300   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:37.562673   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:37.859031   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:37.859260   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:38.061310   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:38.062454   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:38.347017   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:38.348576   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:38.568045   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:38.568527   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:38.847346   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:38.847778   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:39.062899   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:39.066484   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:39.346397   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:39.346749   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:39.563053   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:39.563400   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:39.846700   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:39.846831   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:40.066776   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:40.066838   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:40.346181   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:40.347404   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:40.562032   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:40.562648   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:40.847639   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:40.848213   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:41.061922   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:41.062318   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:41.347894   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:41.348190   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:41.562110   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:41.563014   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:41.846934   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:41.847429   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:42.061344   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:42.061372   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:42.346819   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:42.347270   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:42.561297   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:42.562375   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:42.846977   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:42.847464   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:43.061613   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:43.061888   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:43.347559   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:43.347762   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:43.562185   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:43.564224   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:43.845364   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:43.847520   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:44.061075   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:44.063932   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:44.347754   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:44.348813   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:44.561397   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:44.563086   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:44.846872   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:44.849076   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:45.062734   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:45.063040   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:45.348056   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:45.348957   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:45.561436   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:45.563351   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:45.846992   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:45.847002   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:46.061797   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:46.064393   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:46.355471   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:46.355914   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:46.736577   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:46.750625   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:46.846164   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:46.846259   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:47.060652   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:47.062397   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:47.347970   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:47.348275   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:47.560632   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:47.562212   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:47.847049   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:47.847397   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:48.063829   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:48.064671   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:48.349173   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:48.349838   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 10:22:48.562449   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:48.563086   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:48.846730   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:48.847202   11941 kapi.go:107] duration metric: took 39.505356388s to wait for kubernetes.io/minikube-addons=registry ...
	I0812 10:22:49.061058   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:49.063005   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:49.346247   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:49.561711   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:49.561978   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:49.846927   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:50.061747   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:50.061805   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:50.345842   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:50.561573   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:50.563422   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:50.845738   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:51.060794   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:51.062051   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:51.345847   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:51.562844   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:51.564043   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:51.845860   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:52.062577   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:52.062731   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:52.345513   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:52.560692   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:52.562193   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:52.848827   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:53.061166   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:53.061578   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:53.346606   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:53.561675   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:53.563355   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:53.846689   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:54.061178   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:54.062280   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:54.347560   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:54.561186   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:54.562938   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:54.845957   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:55.062108   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:55.062439   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:55.348421   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:55.562038   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:55.563972   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:55.846165   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:56.061578   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:56.062385   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:56.346454   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:56.561952   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:56.562584   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:56.846272   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:57.064041   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:57.066487   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:57.346271   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:57.561349   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:57.562453   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:57.845210   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:58.061412   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:58.064647   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:58.346400   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:58.561638   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:58.562778   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:58.845900   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:59.061794   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:59.062760   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:59.345948   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:22:59.561438   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:22:59.563343   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:22:59.846721   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:00.062321   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:00.062886   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:00.345907   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:00.562238   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:00.562887   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:00.846234   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:01.073719   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:01.074360   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:01.745978   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:01.746699   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:01.746881   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:01.845952   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:02.062966   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:02.063855   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:02.346218   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:02.567994   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:02.568035   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:02.845458   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:03.060902   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:03.062881   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:03.348077   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:03.561927   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:03.562492   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:03.855592   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:04.061772   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:04.062736   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:04.348857   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:04.565096   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:04.566929   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:04.846092   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:05.061544   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:05.065198   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:05.347449   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:05.561552   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:05.564143   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:05.845708   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:06.060851   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:06.062384   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:06.346256   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:06.562886   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:06.563717   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:06.845433   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:07.060763   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:07.062837   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:07.346643   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:07.561325   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:07.561401   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:07.846121   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:08.062729   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:08.062890   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:08.346916   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:08.561574   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:08.561919   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:08.846702   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:09.061896   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:09.062809   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:09.346018   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:09.561345   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:09.563379   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:10.199450   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:10.200298   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:10.210938   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:10.347763   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:10.562175   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:10.562714   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:10.846342   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:11.061869   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:11.062697   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:11.345762   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:11.561346   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:11.561386   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:11.853306   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:12.068991   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:12.069938   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:12.346460   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:12.565161   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:12.565354   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:13.042819   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:13.071072   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:13.073339   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:13.346640   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:13.561434   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:13.562984   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:13.846494   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:14.061133   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:14.062699   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:14.347063   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:14.561804   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:14.562053   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:14.845831   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:15.061408   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:15.063592   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:15.347271   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:15.560686   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:15.562510   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:15.848051   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:16.061445   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:16.061986   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:16.346049   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:16.561316   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:16.561741   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:16.845463   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:17.061841   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:17.062745   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:17.345745   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:17.561903   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:17.563841   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:17.846290   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:18.062696   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:18.063147   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:18.346428   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:18.561284   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:18.561809   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:18.846446   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:19.062022   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:19.063197   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:19.346166   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:19.561888   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:19.561955   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:19.845868   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:20.062681   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:20.067139   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:20.711567   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:20.723248   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:20.728227   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:20.846011   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:21.062068   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:21.062099   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:21.345877   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:21.561017   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:21.562665   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:21.846425   11941 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 10:23:22.061624   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:22.063328   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:22.347881   11941 kapi.go:107] duration metric: took 1m13.006433112s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0812 10:23:22.561453   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:22.562808   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:23.061265   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:23.062653   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:23.561621   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:23.562980   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:24.061413   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:24.063026   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:24.561575   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:24.562770   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:25.061652   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:25.064117   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:25.561513   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:25.566692   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:26.060844   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:26.062902   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:26.561717   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:26.562306   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:27.062852   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:27.063738   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 10:23:27.560860   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:27.563808   11941 kapi.go:107] duration metric: took 1m15.005390738s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0812 10:23:27.565891   11941 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-883541 cluster.
	I0812 10:23:27.567522   11941 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0812 10:23:27.568838   11941 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
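	(Editor's note) A minimal sketch of the opt-out described in the gcp-auth messages above, assuming a hypothetical pod name (no-gcp-demo) and that the webhook keys off the label value "true"; the label must be present when the pod is created, since existing pods only pick up changes on recreate or --refresh:

	  # hypothetical example; label key taken from the message above, busybox image as used elsewhere in this report
	  kubectl --context addons-883541 run no-gcp-demo --image=gcr.io/k8s-minikube/busybox \
	    --labels=gcp-auth-skip-secret=true --restart=Never -- sleep 3600

	Pods created without that label should get the GCP credential mount injected by the gcp-auth webhook as described above.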
	I0812 10:23:28.061160   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:28.561298   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:29.061527   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:29.562630   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:30.067587   11941 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 10:23:30.561188   11941 kapi.go:107] duration metric: took 1m19.505418908s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0812 10:23:30.563301   11941 out.go:177] * Enabled addons: ingress-dns, default-storageclass, cloud-spanner, storage-provisioner, storage-provisioner-rancher, inspektor-gadget, helm-tiller, metrics-server, nvidia-device-plugin, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0812 10:23:30.564753   11941 addons.go:510] duration metric: took 1m29.985189619s for enable addons: enabled=[ingress-dns default-storageclass cloud-spanner storage-provisioner storage-provisioner-rancher inspektor-gadget helm-tiller metrics-server nvidia-device-plugin yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0812 10:23:30.564797   11941 start.go:246] waiting for cluster config update ...
	I0812 10:23:30.564818   11941 start.go:255] writing updated cluster config ...
	I0812 10:23:30.565090   11941 ssh_runner.go:195] Run: rm -f paused
	I0812 10:23:30.616813   11941 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0812 10:23:30.619131   11941 out.go:177] * Done! kubectl is now configured to use "addons-883541" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 12 10:29:39 addons-883541 crio[684]: time="2024-08-12 10:29:39.989916223Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e4757bd3-ec5e-49ef-8046-f704119d591f name=/runtime.v1.RuntimeService/Version
	Aug 12 10:29:39 addons-883541 crio[684]: time="2024-08-12 10:29:39.992419026Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cee95242-ac0f-4c6a-bbe8-eb76c74d32b7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:29:39 addons-883541 crio[684]: time="2024-08-12 10:29:39.993963092Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723458579993906553,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cee95242-ac0f-4c6a-bbe8-eb76c74d32b7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:29:39 addons-883541 crio[684]: time="2024-08-12 10:29:39.994891279Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6be251df-2481-4217-ac82-fa46a371685c name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:29:39 addons-883541 crio[684]: time="2024-08-12 10:29:39.994963077Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6be251df-2481-4217-ac82-fa46a371685c name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:29:39 addons-883541 crio[684]: time="2024-08-12 10:29:39.995327463Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11a5e064e6cb5a1506aca8acabd38bef0a0c8f9ce761328a6978e9705147e2bc,PodSandboxId:f88ffbef4425c7b68c8ce796b3f6985b7cfc7e4b4bba6bf32b0aadf0356af0d5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723458432864709751,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-rbqvk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 653f616f-3126-4077-84a6-1add780ba5b3,},Annotations:map[string]string{io.kubernetes.container.hash: 633329ed,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71a10f35492f5cf69d9e3d9a97fc1254fba649c3ce5b9e138cce8ff4e202a8ac,PodSandboxId:39f1924da0538a1efb355efbab90692f11350595d0a1ca5f8529afa85860cc5c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723458292525735369,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad4b39e3-5426-4eb3-96c3-66ba2085da60,},Annotations:map[string]string{io.kubernet
es.container.hash: dcd87315,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d28748b211808d0709d1f8d92b1f27773ea3e7c2aa8b891ce2f9b1e71fb82781,PodSandboxId:473e8b06f929f1dee0bcfe74fb75299b8b7ee2084a2598667c47571a6f03b0a9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723458214211710769,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8bcf6cfa-5273-4a43-a
187-d7fac51893ef,},Annotations:map[string]string{io.kubernetes.container.hash: 3191fc01,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:948043f97f132945e4b3f1203d2103f1cb7954af6fbef5b0c9d2be70fb5f25e0,PodSandboxId:f6ef93ba18dca3e036533fab374e89a25913fa5692a2f59ccc6ad03e2ac448ef,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723458150806549828,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-j7r9p,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 64cd8192-55f2-4d23-8337-068eddc6126c,},Annotations:map[string]string{io.kubernetes.container.hash: 335f9a8a,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982e871e7b916db2660344775e283b55cdd6cbdeb7e68ef1ef253e80744917af,PodSandboxId:c4d16467ed2c0bf103a5438825194251e9352ccc10209c08bf2d925151566c42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723458128599638135,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54a9610b-ab55-47f3-943c-2c6f54430fdc,},Annotations:map[string]string{io.kubernetes.container.hash: c281eae3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2533dff57ccee88c91a15d29ba02da9eaa18295973699b8bf3459734209c0a76,PodSandboxId:6a9282e009c9846e999a3cfaf8dccbe0ae59b7f603878cfa270f32b1866416da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723458123998570399,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6
d8ff4d-vgg6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2d3a2bf-c74b-4317-96a2-2a4917a45e7e,},Annotations:map[string]string{io.kubernetes.container.hash: ee22ffcf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30b643ecfade534d90ab374bb964b9b66487428972249222973c6987d2a56338,PodSandboxId:82000d53fdd3a4f5136af28e965de87096c1aeeb8060c7b06481036ad3ff997e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381
d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723458121364462838,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dswsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a29712-f2b7-4371-a3f3-9920d0a4bde5,},Annotations:map[string]string{io.kubernetes.container.hash: 395cea0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ae02c068a5bb6b146f8c8c2ccfe4d8ce5dbd6d02c20d2f8062b8cbbe797ee6,PodSandboxId:455c618e0b16cbd656bc658a3a6b6c2c37a0508c63211e565a13c4e4ce7bd7eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da55
6f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723458101746636633,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efd8a4514a2fd8fc9c6abdbc4414d5a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d494097,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deaf7b141796f275e1a142cafc880eef9e923e65a4144a16e3273e2505a5f1d5,PodSandboxId:e9fce8d5745ee9d6d810921efa27df4c47ce542d7d65ae02c701e3d058690df1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_RUNNING,CreatedAt:1723458101741253207,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c8cf3fc0ab47256c37c9beede9f9b8,},Annotations:map[string]string{io.kubernetes.container.hash: bf804fc3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb7de3bda5707474a51e384e0fa9753d21d19913f168d48b1622e8295eb9d1d,PodSandboxId:ffeadcfa0d6a46d0c46f473ea5d6d2d78ed4b95842950e464ad37b250dc6b776,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:17234581017355
93446,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 655c07d40b75cac802ca567e9e976c83,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2fedb989f75580f757be1c8fd5a50c51e7d45a6bf7c70a0dbde116afe620857,PodSandboxId:0c6ac3b7f06ebf22043ca89766a5f33a52ebc5a4db77ac3ee21e8c3d3af93b8f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723458101541980223,La
bels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d649d7b2d642d21f3eb3783c3e20669,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6be251df-2481-4217-ac82-fa46a371685c name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:29:40 addons-883541 crio[684]: time="2024-08-12 10:29:40.033657388Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b20d8c05-4a5e-459d-b3f2-0a7936d77943 name=/runtime.v1.RuntimeService/Version
	Aug 12 10:29:40 addons-883541 crio[684]: time="2024-08-12 10:29:40.033743872Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b20d8c05-4a5e-459d-b3f2-0a7936d77943 name=/runtime.v1.RuntimeService/Version
	Aug 12 10:29:40 addons-883541 crio[684]: time="2024-08-12 10:29:40.034953671Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6a67f88a-f233-4fe4-b64c-1b4b65150577 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:29:40 addons-883541 crio[684]: time="2024-08-12 10:29:40.036421281Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723458580036365960,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6a67f88a-f233-4fe4-b64c-1b4b65150577 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:29:40 addons-883541 crio[684]: time="2024-08-12 10:29:40.037198408Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9cc4258d-bcd2-4ec7-ad79-5ffa159187f2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:29:40 addons-883541 crio[684]: time="2024-08-12 10:29:40.037258459Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9cc4258d-bcd2-4ec7-ad79-5ffa159187f2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:29:40 addons-883541 crio[684]: time="2024-08-12 10:29:40.037493943Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11a5e064e6cb5a1506aca8acabd38bef0a0c8f9ce761328a6978e9705147e2bc,PodSandboxId:f88ffbef4425c7b68c8ce796b3f6985b7cfc7e4b4bba6bf32b0aadf0356af0d5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723458432864709751,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-rbqvk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 653f616f-3126-4077-84a6-1add780ba5b3,},Annotations:map[string]string{io.kubernetes.container.hash: 633329ed,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71a10f35492f5cf69d9e3d9a97fc1254fba649c3ce5b9e138cce8ff4e202a8ac,PodSandboxId:39f1924da0538a1efb355efbab90692f11350595d0a1ca5f8529afa85860cc5c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723458292525735369,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad4b39e3-5426-4eb3-96c3-66ba2085da60,},Annotations:map[string]string{io.kubernet
es.container.hash: dcd87315,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d28748b211808d0709d1f8d92b1f27773ea3e7c2aa8b891ce2f9b1e71fb82781,PodSandboxId:473e8b06f929f1dee0bcfe74fb75299b8b7ee2084a2598667c47571a6f03b0a9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723458214211710769,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8bcf6cfa-5273-4a43-a
187-d7fac51893ef,},Annotations:map[string]string{io.kubernetes.container.hash: 3191fc01,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:948043f97f132945e4b3f1203d2103f1cb7954af6fbef5b0c9d2be70fb5f25e0,PodSandboxId:f6ef93ba18dca3e036533fab374e89a25913fa5692a2f59ccc6ad03e2ac448ef,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723458150806549828,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-j7r9p,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 64cd8192-55f2-4d23-8337-068eddc6126c,},Annotations:map[string]string{io.kubernetes.container.hash: 335f9a8a,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982e871e7b916db2660344775e283b55cdd6cbdeb7e68ef1ef253e80744917af,PodSandboxId:c4d16467ed2c0bf103a5438825194251e9352ccc10209c08bf2d925151566c42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723458128599638135,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54a9610b-ab55-47f3-943c-2c6f54430fdc,},Annotations:map[string]string{io.kubernetes.container.hash: c281eae3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2533dff57ccee88c91a15d29ba02da9eaa18295973699b8bf3459734209c0a76,PodSandboxId:6a9282e009c9846e999a3cfaf8dccbe0ae59b7f603878cfa270f32b1866416da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723458123998570399,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6
d8ff4d-vgg6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2d3a2bf-c74b-4317-96a2-2a4917a45e7e,},Annotations:map[string]string{io.kubernetes.container.hash: ee22ffcf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30b643ecfade534d90ab374bb964b9b66487428972249222973c6987d2a56338,PodSandboxId:82000d53fdd3a4f5136af28e965de87096c1aeeb8060c7b06481036ad3ff997e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381
d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723458121364462838,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dswsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a29712-f2b7-4371-a3f3-9920d0a4bde5,},Annotations:map[string]string{io.kubernetes.container.hash: 395cea0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ae02c068a5bb6b146f8c8c2ccfe4d8ce5dbd6d02c20d2f8062b8cbbe797ee6,PodSandboxId:455c618e0b16cbd656bc658a3a6b6c2c37a0508c63211e565a13c4e4ce7bd7eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da55
6f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723458101746636633,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efd8a4514a2fd8fc9c6abdbc4414d5a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d494097,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deaf7b141796f275e1a142cafc880eef9e923e65a4144a16e3273e2505a5f1d5,PodSandboxId:e9fce8d5745ee9d6d810921efa27df4c47ce542d7d65ae02c701e3d058690df1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_RUNNING,CreatedAt:1723458101741253207,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c8cf3fc0ab47256c37c9beede9f9b8,},Annotations:map[string]string{io.kubernetes.container.hash: bf804fc3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb7de3bda5707474a51e384e0fa9753d21d19913f168d48b1622e8295eb9d1d,PodSandboxId:ffeadcfa0d6a46d0c46f473ea5d6d2d78ed4b95842950e464ad37b250dc6b776,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:17234581017355
93446,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 655c07d40b75cac802ca567e9e976c83,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2fedb989f75580f757be1c8fd5a50c51e7d45a6bf7c70a0dbde116afe620857,PodSandboxId:0c6ac3b7f06ebf22043ca89766a5f33a52ebc5a4db77ac3ee21e8c3d3af93b8f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723458101541980223,La
bels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d649d7b2d642d21f3eb3783c3e20669,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9cc4258d-bcd2-4ec7-ad79-5ffa159187f2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:29:40 addons-883541 crio[684]: time="2024-08-12 10:29:40.059438921Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=5a30b98a-2f0f-4928-8deb-0c2900e93597 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 12 10:29:40 addons-883541 crio[684]: time="2024-08-12 10:29:40.059719598Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:f88ffbef4425c7b68c8ce796b3f6985b7cfc7e4b4bba6bf32b0aadf0356af0d5,Metadata:&PodSandboxMetadata{Name:hello-world-app-6778b5fc9f-rbqvk,Uid:653f616f-3126-4077-84a6-1add780ba5b3,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723458430359410112,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-rbqvk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 653f616f-3126-4077-84a6-1add780ba5b3,pod-template-hash: 6778b5fc9f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-12T10:27:10.046143323Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:39f1924da0538a1efb355efbab90692f11350595d0a1ca5f8529afa85860cc5c,Metadata:&PodSandboxMetadata{Name:nginx,Uid:ad4b39e3-5426-4eb3-96c3-66ba2085da60,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1723458288729251038,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad4b39e3-5426-4eb3-96c3-66ba2085da60,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-12T10:24:48.420459855Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:473e8b06f929f1dee0bcfe74fb75299b8b7ee2084a2598667c47571a6f03b0a9,Metadata:&PodSandboxMetadata{Name:busybox,Uid:8bcf6cfa-5273-4a43-a187-d7fac51893ef,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723458211215676266,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8bcf6cfa-5273-4a43-a187-d7fac51893ef,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-12T10:23:30.907306953Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f6ef93ba18dca3e036
533fab374e89a25913fa5692a2f59ccc6ad03e2ac448ef,Metadata:&PodSandboxMetadata{Name:metrics-server-c59844bb4-j7r9p,Uid:64cd8192-55f2-4d23-8337-068eddc6126c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723458126620817149,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-c59844bb4-j7r9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64cd8192-55f2-4d23-8337-068eddc6126c,k8s-app: metrics-server,pod-template-hash: c59844bb4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-12T10:22:06.306832976Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c4d16467ed2c0bf103a5438825194251e9352ccc10209c08bf2d925151566c42,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:54a9610b-ab55-47f3-943c-2c6f54430fdc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723458126463665081,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernet
es.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54a9610b-ab55-47f3-943c-2c6f54430fdc,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-12T10:22:05.959330765Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&
PodSandbox{Id:6a9282e009c9846e999a3cfaf8dccbe0ae59b7f603878cfa270f32b1866416da,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-vgg6r,Uid:d2d3a2bf-c74b-4317-96a2-2a4917a45e7e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723458121221111439,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-vgg6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2d3a2bf-c74b-4317-96a2-2a4917a45e7e,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-12T10:22:00.905449853Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:82000d53fdd3a4f5136af28e965de87096c1aeeb8060c7b06481036ad3ff997e,Metadata:&PodSandboxMetadata{Name:kube-proxy-dswsl,Uid:73a29712-f2b7-4371-a3f3-9920d0a4bde5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723458120661183559,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubern
etes.pod.name: kube-proxy-dswsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a29712-f2b7-4371-a3f3-9920d0a4bde5,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-12T10:21:59.742132868Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0c6ac3b7f06ebf22043ca89766a5f33a52ebc5a4db77ac3ee21e8c3d3af93b8f,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-883541,Uid:8d649d7b2d642d21f3eb3783c3e20669,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723458101205132779,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d649d7b2d642d21f3eb3783c3e20669,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8d649d7b2d642d21f3eb3783c3e20669,kubernetes.io/config.seen: 2024-08-12T10:21:40.751628732Z,kuber
netes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:455c618e0b16cbd656bc658a3a6b6c2c37a0508c63211e565a13c4e4ce7bd7eb,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-883541,Uid:8efd8a4514a2fd8fc9c6abdbc4414d5a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723458101203479535,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efd8a4514a2fd8fc9c6abdbc4414d5a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.215:8443,kubernetes.io/config.hash: 8efd8a4514a2fd8fc9c6abdbc4414d5a,kubernetes.io/config.seen: 2024-08-12T10:21:40.751627796Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e9fce8d5745ee9d6d810921efa27df4c47ce542d7d65ae02c701e3d058690df1,Metadata:&PodSandboxMetadata{Name:etcd-addons-883541,Uid:68c8cf3fc0ab47256c37c9beede9f9
b8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723458101202623316,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c8cf3fc0ab47256c37c9beede9f9b8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.215:2379,kubernetes.io/config.hash: 68c8cf3fc0ab47256c37c9beede9f9b8,kubernetes.io/config.seen: 2024-08-12T10:21:40.751626275Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ffeadcfa0d6a46d0c46f473ea5d6d2d78ed4b95842950e464ad37b250dc6b776,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-883541,Uid:655c07d40b75cac802ca567e9e976c83,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723458101201641248,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-883541,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 655c07d40b75cac802ca567e9e976c83,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 655c07d40b75cac802ca567e9e976c83,kubernetes.io/config.seen: 2024-08-12T10:21:40.751622671Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=5a30b98a-2f0f-4928-8deb-0c2900e93597 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 12 10:29:40 addons-883541 crio[684]: time="2024-08-12 10:29:40.060404084Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=565e487b-68a1-4d43-884f-1df9d856bb56 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:29:40 addons-883541 crio[684]: time="2024-08-12 10:29:40.060465523Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=565e487b-68a1-4d43-884f-1df9d856bb56 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:29:40 addons-883541 crio[684]: time="2024-08-12 10:29:40.060718179Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11a5e064e6cb5a1506aca8acabd38bef0a0c8f9ce761328a6978e9705147e2bc,PodSandboxId:f88ffbef4425c7b68c8ce796b3f6985b7cfc7e4b4bba6bf32b0aadf0356af0d5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723458432864709751,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-rbqvk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 653f616f-3126-4077-84a6-1add780ba5b3,},Annotations:map[string]string{io.kubernetes.container.hash: 633329ed,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71a10f35492f5cf69d9e3d9a97fc1254fba649c3ce5b9e138cce8ff4e202a8ac,PodSandboxId:39f1924da0538a1efb355efbab90692f11350595d0a1ca5f8529afa85860cc5c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723458292525735369,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad4b39e3-5426-4eb3-96c3-66ba2085da60,},Annotations:map[string]string{io.kubernet
es.container.hash: dcd87315,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d28748b211808d0709d1f8d92b1f27773ea3e7c2aa8b891ce2f9b1e71fb82781,PodSandboxId:473e8b06f929f1dee0bcfe74fb75299b8b7ee2084a2598667c47571a6f03b0a9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723458214211710769,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8bcf6cfa-5273-4a43-a
187-d7fac51893ef,},Annotations:map[string]string{io.kubernetes.container.hash: 3191fc01,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:948043f97f132945e4b3f1203d2103f1cb7954af6fbef5b0c9d2be70fb5f25e0,PodSandboxId:f6ef93ba18dca3e036533fab374e89a25913fa5692a2f59ccc6ad03e2ac448ef,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723458150806549828,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-j7r9p,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 64cd8192-55f2-4d23-8337-068eddc6126c,},Annotations:map[string]string{io.kubernetes.container.hash: 335f9a8a,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982e871e7b916db2660344775e283b55cdd6cbdeb7e68ef1ef253e80744917af,PodSandboxId:c4d16467ed2c0bf103a5438825194251e9352ccc10209c08bf2d925151566c42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723458128599638135,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54a9610b-ab55-47f3-943c-2c6f54430fdc,},Annotations:map[string]string{io.kubernetes.container.hash: c281eae3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2533dff57ccee88c91a15d29ba02da9eaa18295973699b8bf3459734209c0a76,PodSandboxId:6a9282e009c9846e999a3cfaf8dccbe0ae59b7f603878cfa270f32b1866416da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723458123998570399,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6
d8ff4d-vgg6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2d3a2bf-c74b-4317-96a2-2a4917a45e7e,},Annotations:map[string]string{io.kubernetes.container.hash: ee22ffcf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30b643ecfade534d90ab374bb964b9b66487428972249222973c6987d2a56338,PodSandboxId:82000d53fdd3a4f5136af28e965de87096c1aeeb8060c7b06481036ad3ff997e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381
d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723458121364462838,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dswsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a29712-f2b7-4371-a3f3-9920d0a4bde5,},Annotations:map[string]string{io.kubernetes.container.hash: 395cea0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ae02c068a5bb6b146f8c8c2ccfe4d8ce5dbd6d02c20d2f8062b8cbbe797ee6,PodSandboxId:455c618e0b16cbd656bc658a3a6b6c2c37a0508c63211e565a13c4e4ce7bd7eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da55
6f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723458101746636633,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efd8a4514a2fd8fc9c6abdbc4414d5a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d494097,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deaf7b141796f275e1a142cafc880eef9e923e65a4144a16e3273e2505a5f1d5,PodSandboxId:e9fce8d5745ee9d6d810921efa27df4c47ce542d7d65ae02c701e3d058690df1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_RUNNING,CreatedAt:1723458101741253207,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c8cf3fc0ab47256c37c9beede9f9b8,},Annotations:map[string]string{io.kubernetes.container.hash: bf804fc3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb7de3bda5707474a51e384e0fa9753d21d19913f168d48b1622e8295eb9d1d,PodSandboxId:ffeadcfa0d6a46d0c46f473ea5d6d2d78ed4b95842950e464ad37b250dc6b776,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:17234581017355
93446,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 655c07d40b75cac802ca567e9e976c83,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2fedb989f75580f757be1c8fd5a50c51e7d45a6bf7c70a0dbde116afe620857,PodSandboxId:0c6ac3b7f06ebf22043ca89766a5f33a52ebc5a4db77ac3ee21e8c3d3af93b8f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723458101541980223,La
bels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d649d7b2d642d21f3eb3783c3e20669,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=565e487b-68a1-4d43-884f-1df9d856bb56 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:29:40 addons-883541 crio[684]: time="2024-08-12 10:29:40.072539022Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3130dc84-745b-4bca-81be-90ef11a53e00 name=/runtime.v1.RuntimeService/Version
	Aug 12 10:29:40 addons-883541 crio[684]: time="2024-08-12 10:29:40.072631085Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3130dc84-745b-4bca-81be-90ef11a53e00 name=/runtime.v1.RuntimeService/Version
	Aug 12 10:29:40 addons-883541 crio[684]: time="2024-08-12 10:29:40.073915229Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a031fe76-c82d-41c4-a42d-6bb4e94ac7fd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:29:40 addons-883541 crio[684]: time="2024-08-12 10:29:40.075472827Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723458580075400838,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a031fe76-c82d-41c4-a42d-6bb4e94ac7fd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:29:40 addons-883541 crio[684]: time="2024-08-12 10:29:40.075987435Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6a7e85b-1680-46d1-8d0c-e606b2356b37 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:29:40 addons-883541 crio[684]: time="2024-08-12 10:29:40.076092620Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6a7e85b-1680-46d1-8d0c-e606b2356b37 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:29:40 addons-883541 crio[684]: time="2024-08-12 10:29:40.076334624Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11a5e064e6cb5a1506aca8acabd38bef0a0c8f9ce761328a6978e9705147e2bc,PodSandboxId:f88ffbef4425c7b68c8ce796b3f6985b7cfc7e4b4bba6bf32b0aadf0356af0d5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723458432864709751,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-rbqvk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 653f616f-3126-4077-84a6-1add780ba5b3,},Annotations:map[string]string{io.kubernetes.container.hash: 633329ed,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71a10f35492f5cf69d9e3d9a97fc1254fba649c3ce5b9e138cce8ff4e202a8ac,PodSandboxId:39f1924da0538a1efb355efbab90692f11350595d0a1ca5f8529afa85860cc5c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723458292525735369,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad4b39e3-5426-4eb3-96c3-66ba2085da60,},Annotations:map[string]string{io.kubernet
es.container.hash: dcd87315,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d28748b211808d0709d1f8d92b1f27773ea3e7c2aa8b891ce2f9b1e71fb82781,PodSandboxId:473e8b06f929f1dee0bcfe74fb75299b8b7ee2084a2598667c47571a6f03b0a9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723458214211710769,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8bcf6cfa-5273-4a43-a
187-d7fac51893ef,},Annotations:map[string]string{io.kubernetes.container.hash: 3191fc01,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:948043f97f132945e4b3f1203d2103f1cb7954af6fbef5b0c9d2be70fb5f25e0,PodSandboxId:f6ef93ba18dca3e036533fab374e89a25913fa5692a2f59ccc6ad03e2ac448ef,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723458150806549828,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-j7r9p,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 64cd8192-55f2-4d23-8337-068eddc6126c,},Annotations:map[string]string{io.kubernetes.container.hash: 335f9a8a,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982e871e7b916db2660344775e283b55cdd6cbdeb7e68ef1ef253e80744917af,PodSandboxId:c4d16467ed2c0bf103a5438825194251e9352ccc10209c08bf2d925151566c42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723458128599638135,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54a9610b-ab55-47f3-943c-2c6f54430fdc,},Annotations:map[string]string{io.kubernetes.container.hash: c281eae3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2533dff57ccee88c91a15d29ba02da9eaa18295973699b8bf3459734209c0a76,PodSandboxId:6a9282e009c9846e999a3cfaf8dccbe0ae59b7f603878cfa270f32b1866416da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723458123998570399,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6
d8ff4d-vgg6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2d3a2bf-c74b-4317-96a2-2a4917a45e7e,},Annotations:map[string]string{io.kubernetes.container.hash: ee22ffcf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30b643ecfade534d90ab374bb964b9b66487428972249222973c6987d2a56338,PodSandboxId:82000d53fdd3a4f5136af28e965de87096c1aeeb8060c7b06481036ad3ff997e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381
d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723458121364462838,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dswsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a29712-f2b7-4371-a3f3-9920d0a4bde5,},Annotations:map[string]string{io.kubernetes.container.hash: 395cea0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ae02c068a5bb6b146f8c8c2ccfe4d8ce5dbd6d02c20d2f8062b8cbbe797ee6,PodSandboxId:455c618e0b16cbd656bc658a3a6b6c2c37a0508c63211e565a13c4e4ce7bd7eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da55
6f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723458101746636633,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efd8a4514a2fd8fc9c6abdbc4414d5a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d494097,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deaf7b141796f275e1a142cafc880eef9e923e65a4144a16e3273e2505a5f1d5,PodSandboxId:e9fce8d5745ee9d6d810921efa27df4c47ce542d7d65ae02c701e3d058690df1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_RUNNING,CreatedAt:1723458101741253207,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c8cf3fc0ab47256c37c9beede9f9b8,},Annotations:map[string]string{io.kubernetes.container.hash: bf804fc3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb7de3bda5707474a51e384e0fa9753d21d19913f168d48b1622e8295eb9d1d,PodSandboxId:ffeadcfa0d6a46d0c46f473ea5d6d2d78ed4b95842950e464ad37b250dc6b776,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:17234581017355
93446,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 655c07d40b75cac802ca567e9e976c83,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2fedb989f75580f757be1c8fd5a50c51e7d45a6bf7c70a0dbde116afe620857,PodSandboxId:0c6ac3b7f06ebf22043ca89766a5f33a52ebc5a4db77ac3ee21e8c3d3af93b8f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723458101541980223,La
bels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-883541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d649d7b2d642d21f3eb3783c3e20669,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6a7e85b-1680-46d1-8d0c-e606b2356b37 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	11a5e064e6cb5       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   f88ffbef4425c       hello-world-app-6778b5fc9f-rbqvk
	71a10f35492f5       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         4 minutes ago       Running             nginx                     0                   39f1924da0538       nginx
	d28748b211808       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   473e8b06f929f       busybox
	948043f97f132       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Running             metrics-server            0                   f6ef93ba18dca       metrics-server-c59844bb4-j7r9p
	982e871e7b916       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   c4d16467ed2c0       storage-provisioner
	2533dff57ccee       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   6a9282e009c98       coredns-7db6d8ff4d-vgg6r
	30b643ecfade5       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                        7 minutes ago       Running             kube-proxy                0                   82000d53fdd3a       kube-proxy-dswsl
	10ae02c068a5b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                        7 minutes ago       Running             kube-apiserver            0                   455c618e0b16c       kube-apiserver-addons-883541
	deaf7b141796f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        7 minutes ago       Running             etcd                      0                   e9fce8d5745ee       etcd-addons-883541
	beb7de3bda570       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                        7 minutes ago       Running             kube-scheduler            0                   ffeadcfa0d6a4       kube-scheduler-addons-883541
	e2fedb989f755       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                        7 minutes ago       Running             kube-controller-manager   0                   0c6ac3b7f06eb       kube-controller-manager-addons-883541
	
	
	==> coredns [2533dff57ccee88c91a15d29ba02da9eaa18295973699b8bf3459734209c0a76] <==
	[INFO] 10.244.0.8:37449 - 50678 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000181869s
	[INFO] 10.244.0.8:45344 - 12007 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000180694s
	[INFO] 10.244.0.8:45344 - 25056 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000105929s
	[INFO] 10.244.0.8:56326 - 35353 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00010087s
	[INFO] 10.244.0.8:56326 - 15463 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000153691s
	[INFO] 10.244.0.8:59153 - 32412 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00020651s
	[INFO] 10.244.0.8:59153 - 39581 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000114799s
	[INFO] 10.244.0.8:60170 - 865 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000189236s
	[INFO] 10.244.0.8:60170 - 59747 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00036836s
	[INFO] 10.244.0.8:60979 - 30783 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000057422s
	[INFO] 10.244.0.8:60979 - 40242 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000148391s
	[INFO] 10.244.0.8:57768 - 44717 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000067885s
	[INFO] 10.244.0.8:57768 - 27054 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000025757s
	[INFO] 10.244.0.8:59145 - 983 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000083901s
	[INFO] 10.244.0.8:59145 - 6869 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000079095s
	[INFO] 10.244.0.22:45813 - 15159 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000366885s
	[INFO] 10.244.0.22:48779 - 22127 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000111368s
	[INFO] 10.244.0.22:44414 - 10326 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000177358s
	[INFO] 10.244.0.22:43327 - 52036 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000098477s
	[INFO] 10.244.0.22:37897 - 24068 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000073768s
	[INFO] 10.244.0.22:37744 - 20700 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000076627s
	[INFO] 10.244.0.22:52089 - 40911 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00059563s
	[INFO] 10.244.0.22:54749 - 16773 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.000363019s
	[INFO] 10.244.0.26:58622 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000321352s
	[INFO] 10.244.0.26:39150 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000235232s
	
	
	==> describe nodes <==
	Name:               addons-883541
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-883541
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
	                    minikube.k8s.io/name=addons-883541
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_12T10_21_47_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-883541
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 10:21:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-883541
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 10:29:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 10:27:23 +0000   Mon, 12 Aug 2024 10:21:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 10:27:23 +0000   Mon, 12 Aug 2024 10:21:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 10:27:23 +0000   Mon, 12 Aug 2024 10:21:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 10:27:23 +0000   Mon, 12 Aug 2024 10:21:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    addons-883541
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 84cd9f99e87c4addbf07374676c6a3d9
	  System UUID:                84cd9f99-e87c-4add-bf07-374676c6a3d9
	  Boot ID:                    4f64d5e5-194e-41c4-b20c-ff2d6cdb7b8d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  default                     hello-world-app-6778b5fc9f-rbqvk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 coredns-7db6d8ff4d-vgg6r                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m40s
	  kube-system                 etcd-addons-883541                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m53s
	  kube-system                 kube-apiserver-addons-883541             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m53s
	  kube-system                 kube-controller-manager-addons-883541    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m54s
	  kube-system                 kube-proxy-dswsl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m41s
	  kube-system                 kube-scheduler-addons-883541             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m53s
	  kube-system                 metrics-server-c59844bb4-j7r9p           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m34s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 7m38s            kube-proxy       
	  Normal  NodeHasSufficientMemory  8m (x8 over 8m)  kubelet          Node addons-883541 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m (x8 over 8m)  kubelet          Node addons-883541 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m (x7 over 8m)  kubelet          Node addons-883541 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m               kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m53s            kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m53s            kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m53s            kubelet          Node addons-883541 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m53s            kubelet          Node addons-883541 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m53s            kubelet          Node addons-883541 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m52s            kubelet          Node addons-883541 status is now: NodeReady
	  Normal  RegisteredNode           7m41s            node-controller  Node addons-883541 event: Registered Node addons-883541 in Controller
	
	
	==> dmesg <==
	[ +18.144890] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.121303] kauditd_printk_skb: 32 callbacks suppressed
	[Aug12 10:23] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.201861] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.347883] kauditd_printk_skb: 60 callbacks suppressed
	[  +8.376878] kauditd_printk_skb: 50 callbacks suppressed
	[  +5.217612] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.058565] kauditd_printk_skb: 47 callbacks suppressed
	[ +10.943992] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.920449] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.900454] kauditd_printk_skb: 15 callbacks suppressed
	[Aug12 10:24] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.293661] kauditd_printk_skb: 74 callbacks suppressed
	[  +6.147476] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.253103] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.298744] kauditd_printk_skb: 10 callbacks suppressed
	[  +7.004786] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.464795] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.105016] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.709935] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.010905] kauditd_printk_skb: 16 callbacks suppressed
	[  +6.865860] kauditd_printk_skb: 6 callbacks suppressed
	[Aug12 10:25] kauditd_printk_skb: 33 callbacks suppressed
	[Aug12 10:27] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.250250] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [deaf7b141796f275e1a142cafc880eef9e923e65a4144a16e3273e2505a5f1d5] <==
	{"level":"info","ts":"2024-08-12T10:23:13.024214Z","caller":"traceutil/trace.go:171","msg":"trace[526650710] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1099; }","duration":"189.259235ms","start":"2024-08-12T10:23:12.834949Z","end":"2024-08-12T10:23:13.024208Z","steps":["trace[526650710] 'agreement among raft nodes before linearized reading'  (duration: 189.15869ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T10:23:17.507614Z","caller":"traceutil/trace.go:171","msg":"trace[158605441] transaction","detail":"{read_only:false; response_revision:1143; number_of_response:1; }","duration":"131.198666ms","start":"2024-08-12T10:23:17.376399Z","end":"2024-08-12T10:23:17.507598Z","steps":["trace[158605441] 'process raft request'  (duration: 131.093214ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T10:23:20.694861Z","caller":"traceutil/trace.go:171","msg":"trace[894815587] transaction","detail":"{read_only:false; response_revision:1151; number_of_response:1; }","duration":"428.608268ms","start":"2024-08-12T10:23:20.266236Z","end":"2024-08-12T10:23:20.694844Z","steps":["trace[894815587] 'process raft request'  (duration: 428.358865ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T10:23:20.694973Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-12T10:23:20.266221Z","time spent":"428.69643ms","remote":"127.0.0.1:56106","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1147 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-08-12T10:23:20.695244Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"363.816749ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-08-12T10:23:20.695348Z","caller":"traceutil/trace.go:171","msg":"trace[1974372132] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1151; }","duration":"363.931063ms","start":"2024-08-12T10:23:20.331407Z","end":"2024-08-12T10:23:20.695338Z","steps":["trace[1974372132] 'agreement among raft nodes before linearized reading'  (duration: 363.680403ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T10:23:20.695963Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-12T10:23:20.331393Z","time spent":"364.553197ms","remote":"127.0.0.1:56124","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":14386,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"info","ts":"2024-08-12T10:23:20.695281Z","caller":"traceutil/trace.go:171","msg":"trace[523090185] linearizableReadLoop","detail":"{readStateIndex:1189; appliedIndex:1189; }","duration":"363.465858ms","start":"2024-08-12T10:23:20.331427Z","end":"2024-08-12T10:23:20.694892Z","steps":["trace[523090185] 'read index received'  (duration: 363.457598ms)","trace[523090185] 'applied index is now lower than readState.Index'  (duration: 6.803µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-12T10:23:20.701725Z","caller":"traceutil/trace.go:171","msg":"trace[829008344] transaction","detail":"{read_only:false; response_revision:1152; number_of_response:1; }","duration":"109.640284ms","start":"2024-08-12T10:23:20.592072Z","end":"2024-08-12T10:23:20.701712Z","steps":["trace[829008344] 'process raft request'  (duration: 109.478577ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T10:23:20.701841Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.860245ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-08-12T10:23:20.70188Z","caller":"traceutil/trace.go:171","msg":"trace[1064386221] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1152; }","duration":"153.897536ms","start":"2024-08-12T10:23:20.547975Z","end":"2024-08-12T10:23:20.701872Z","steps":["trace[1064386221] 'agreement among raft nodes before linearized reading'  (duration: 153.828417ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T10:23:20.701787Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.859955ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85652"}
	{"level":"info","ts":"2024-08-12T10:23:20.702348Z","caller":"traceutil/trace.go:171","msg":"trace[1306847445] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1152; }","duration":"156.44611ms","start":"2024-08-12T10:23:20.545893Z","end":"2024-08-12T10:23:20.702339Z","steps":["trace[1306847445] 'agreement among raft nodes before linearized reading'  (duration: 155.744827ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T10:23:47.193431Z","caller":"traceutil/trace.go:171","msg":"trace[476845149] linearizableReadLoop","detail":"{readStateIndex:1352; appliedIndex:1351; }","duration":"185.181706ms","start":"2024-08-12T10:23:47.008207Z","end":"2024-08-12T10:23:47.193389Z","steps":["trace[476845149] 'read index received'  (duration: 185.001256ms)","trace[476845149] 'applied index is now lower than readState.Index'  (duration: 179.543µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-12T10:23:47.193565Z","caller":"traceutil/trace.go:171","msg":"trace[1877012934] transaction","detail":"{read_only:false; response_revision:1307; number_of_response:1; }","duration":"356.023905ms","start":"2024-08-12T10:23:46.837525Z","end":"2024-08-12T10:23:47.193548Z","steps":["trace[1877012934] 'process raft request'  (duration: 355.739703ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T10:23:47.193685Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.95883ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-12T10:23:47.193719Z","caller":"traceutil/trace.go:171","msg":"trace[1807069227] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; response_count:0; response_revision:1307; }","duration":"109.04786ms","start":"2024-08-12T10:23:47.084662Z","end":"2024-08-12T10:23:47.19371Z","steps":["trace[1807069227] 'agreement among raft nodes before linearized reading'  (duration: 108.961767ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T10:23:47.193791Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.581372ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-12T10:23:47.193815Z","caller":"traceutil/trace.go:171","msg":"trace[872508521] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/; range_end:/registry/apiregistration.k8s.io/apiservices0; response_count:0; response_revision:1307; }","duration":"185.633208ms","start":"2024-08-12T10:23:47.008175Z","end":"2024-08-12T10:23:47.193808Z","steps":["trace[872508521] 'agreement among raft nodes before linearized reading'  (duration: 185.576449ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T10:23:47.193719Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-12T10:23:46.837508Z","time spent":"356.080043ms","remote":"127.0.0.1:56106","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1300 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-08-12T10:24:34.816677Z","caller":"traceutil/trace.go:171","msg":"trace[453538121] transaction","detail":"{read_only:false; response_revision:1631; number_of_response:1; }","duration":"136.502037ms","start":"2024-08-12T10:24:34.680108Z","end":"2024-08-12T10:24:34.81661Z","steps":["trace[453538121] 'process raft request'  (duration: 136.106999ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T10:25:05.80678Z","caller":"traceutil/trace.go:171","msg":"trace[1928305594] linearizableReadLoop","detail":"{readStateIndex:1970; appliedIndex:1969; }","duration":"163.064295ms","start":"2024-08-12T10:25:05.643702Z","end":"2024-08-12T10:25:05.806766Z","steps":["trace[1928305594] 'read index received'  (duration: 162.940344ms)","trace[1928305594] 'applied index is now lower than readState.Index'  (duration: 123.483µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-12T10:25:05.806939Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.199762ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/csi-snapshotter\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-12T10:25:05.806966Z","caller":"traceutil/trace.go:171","msg":"trace[253925166] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/csi-snapshotter; range_end:; response_count:0; response_revision:1901; }","duration":"163.283685ms","start":"2024-08-12T10:25:05.643676Z","end":"2024-08-12T10:25:05.806959Z","steps":["trace[253925166] 'agreement among raft nodes before linearized reading'  (duration: 163.166647ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T10:25:05.807259Z","caller":"traceutil/trace.go:171","msg":"trace[265932206] transaction","detail":"{read_only:false; response_revision:1901; number_of_response:1; }","duration":"190.510699ms","start":"2024-08-12T10:25:05.616735Z","end":"2024-08-12T10:25:05.807246Z","steps":["trace[265932206] 'process raft request'  (duration: 189.948942ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:29:40 up 8 min,  0 users,  load average: 0.18, 0.66, 0.50
	Linux addons-883541 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [10ae02c068a5bb6b146f8c8c2ccfe4d8ce5dbd6d02c20d2f8062b8cbbe797ee6] <==
	I0812 10:23:36.259655       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0812 10:23:41.106405       1 conn.go:339] Error on socket receive: read tcp 192.168.39.215:8443->192.168.39.1:59544: use of closed network connection
	E0812 10:23:41.293982       1 conn.go:339] Error on socket receive: read tcp 192.168.39.215:8443->192.168.39.1:59578: use of closed network connection
	E0812 10:24:15.983870       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.215:8443->10.244.0.28:35508: read: connection reset by peer
	E0812 10:24:17.577121       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0812 10:24:25.200927       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.224.51"}
	I0812 10:24:39.037308       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0812 10:24:48.274151       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0812 10:24:48.459708       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.219.239"}
	I0812 10:24:51.935980       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0812 10:24:53.002364       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0812 10:25:08.404299       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0812 10:25:08.404351       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0812 10:25:08.429517       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0812 10:25:08.429578       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0812 10:25:08.466949       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0812 10:25:08.467058       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0812 10:25:08.474271       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0812 10:25:08.474320       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0812 10:25:08.489264       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0812 10:25:08.489307       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0812 10:25:09.475201       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0812 10:25:09.489327       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0812 10:25:09.501283       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0812 10:27:10.212584       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.240.219"}
	
	
	==> kube-controller-manager [e2fedb989f75580f757be1c8fd5a50c51e7d45a6bf7c70a0dbde116afe620857] <==
	I0812 10:27:12.100490       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0812 10:27:13.489614       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="9.296214ms"
	I0812 10:27:13.489699       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="33.291µs"
	I0812 10:27:22.248436       1 namespace_controller.go:182] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W0812 10:27:31.307894       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 10:27:31.308090       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0812 10:27:46.348651       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 10:27:46.348731       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0812 10:27:59.082383       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 10:27:59.082527       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0812 10:28:07.105134       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 10:28:07.105316       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0812 10:28:15.364354       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 10:28:15.364561       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0812 10:28:24.594282       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 10:28:24.594372       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0812 10:28:48.989983       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 10:28:48.990224       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0812 10:29:06.986625       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 10:29:06.986778       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0812 10:29:11.589795       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 10:29:11.589852       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0812 10:29:12.532894       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 10:29:12.532950       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0812 10:29:39.054525       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="12.59µs"
	
	
	==> kube-proxy [30b643ecfade534d90ab374bb964b9b66487428972249222973c6987d2a56338] <==
	I0812 10:22:02.161935       1 server_linux.go:69] "Using iptables proxy"
	I0812 10:22:02.178980       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.215"]
	I0812 10:22:02.273619       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0812 10:22:02.273655       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0812 10:22:02.273671       1 server_linux.go:165] "Using iptables Proxier"
	I0812 10:22:02.278628       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0812 10:22:02.278819       1 server.go:872] "Version info" version="v1.30.3"
	I0812 10:22:02.278837       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 10:22:02.280390       1 config.go:192] "Starting service config controller"
	I0812 10:22:02.280400       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0812 10:22:02.280422       1 config.go:101] "Starting endpoint slice config controller"
	I0812 10:22:02.280426       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0812 10:22:02.280766       1 config.go:319] "Starting node config controller"
	I0812 10:22:02.280772       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0812 10:22:02.381501       1 shared_informer.go:320] Caches are synced for node config
	I0812 10:22:02.381561       1 shared_informer.go:320] Caches are synced for service config
	I0812 10:22:02.381580       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [beb7de3bda5707474a51e384e0fa9753d21d19913f168d48b1622e8295eb9d1d] <==
	W0812 10:21:44.468455       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0812 10:21:44.468480       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0812 10:21:44.468461       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0812 10:21:44.468495       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0812 10:21:44.468529       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0812 10:21:44.468558       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0812 10:21:44.468686       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0812 10:21:44.468755       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0812 10:21:45.301708       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0812 10:21:45.301756       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0812 10:21:45.332150       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0812 10:21:45.332195       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0812 10:21:45.598236       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0812 10:21:45.598413       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0812 10:21:45.619729       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0812 10:21:45.620565       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0812 10:21:45.635218       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0812 10:21:45.636126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0812 10:21:45.731093       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0812 10:21:45.731243       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0812 10:21:45.777087       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0812 10:21:45.778097       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0812 10:21:45.974718       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0812 10:21:45.974809       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0812 10:21:47.958278       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 12 10:27:15 addons-883541 kubelet[1260]: I0812 10:27:15.476091    1260 scope.go:117] "RemoveContainer" containerID="5044a57a2dcad3544ed6e37106be93f534e10cbf5cdeba246548340fa7d5f87e"
	Aug 12 10:27:15 addons-883541 kubelet[1260]: I0812 10:27:15.501958    1260 scope.go:117] "RemoveContainer" containerID="5044a57a2dcad3544ed6e37106be93f534e10cbf5cdeba246548340fa7d5f87e"
	Aug 12 10:27:15 addons-883541 kubelet[1260]: E0812 10:27:15.502846    1260 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5044a57a2dcad3544ed6e37106be93f534e10cbf5cdeba246548340fa7d5f87e\": container with ID starting with 5044a57a2dcad3544ed6e37106be93f534e10cbf5cdeba246548340fa7d5f87e not found: ID does not exist" containerID="5044a57a2dcad3544ed6e37106be93f534e10cbf5cdeba246548340fa7d5f87e"
	Aug 12 10:27:15 addons-883541 kubelet[1260]: I0812 10:27:15.502892    1260 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5044a57a2dcad3544ed6e37106be93f534e10cbf5cdeba246548340fa7d5f87e"} err="failed to get container status \"5044a57a2dcad3544ed6e37106be93f534e10cbf5cdeba246548340fa7d5f87e\": rpc error: code = NotFound desc = could not find container \"5044a57a2dcad3544ed6e37106be93f534e10cbf5cdeba246548340fa7d5f87e\": container with ID starting with 5044a57a2dcad3544ed6e37106be93f534e10cbf5cdeba246548340fa7d5f87e not found: ID does not exist"
	Aug 12 10:27:17 addons-883541 kubelet[1260]: I0812 10:27:17.157801    1260 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb8c0719-79ca-42a6-ab6f-88e8e6a528b7" path="/var/lib/kubelet/pods/cb8c0719-79ca-42a6-ab6f-88e8e6a528b7/volumes"
	Aug 12 10:27:47 addons-883541 kubelet[1260]: E0812 10:27:47.174466    1260 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 10:27:47 addons-883541 kubelet[1260]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 10:27:47 addons-883541 kubelet[1260]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 10:27:47 addons-883541 kubelet[1260]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 10:27:47 addons-883541 kubelet[1260]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 10:27:47 addons-883541 kubelet[1260]: I0812 10:27:47.883865    1260 scope.go:117] "RemoveContainer" containerID="eac240509e2e5138f4753e7babec07cc3437d645991f345ed566685b6351c2d6"
	Aug 12 10:27:47 addons-883541 kubelet[1260]: I0812 10:27:47.905630    1260 scope.go:117] "RemoveContainer" containerID="3edc3a24ab1916ea64bfc0fdb218a5b2c79f719140a4b1221dd0e0c45008fd7b"
	Aug 12 10:28:33 addons-883541 kubelet[1260]: I0812 10:28:33.152675    1260 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 12 10:28:47 addons-883541 kubelet[1260]: E0812 10:28:47.176090    1260 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 10:28:47 addons-883541 kubelet[1260]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 10:28:47 addons-883541 kubelet[1260]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 10:28:47 addons-883541 kubelet[1260]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 10:28:47 addons-883541 kubelet[1260]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 10:29:39 addons-883541 kubelet[1260]: I0812 10:29:39.084256    1260 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-rbqvk" podStartSLOduration=146.85722923 podStartE2EDuration="2m29.084218511s" podCreationTimestamp="2024-08-12 10:27:10 +0000 UTC" firstStartedPulling="2024-08-12 10:27:10.621872429 +0000 UTC m=+323.600624390" lastFinishedPulling="2024-08-12 10:27:12.848861707 +0000 UTC m=+325.827613671" observedRunningTime="2024-08-12 10:27:13.479465868 +0000 UTC m=+326.458217849" watchObservedRunningTime="2024-08-12 10:29:39.084218511 +0000 UTC m=+472.062970492"
	Aug 12 10:29:40 addons-883541 kubelet[1260]: I0812 10:29:40.510867    1260 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/64cd8192-55f2-4d23-8337-068eddc6126c-tmp-dir\") pod \"64cd8192-55f2-4d23-8337-068eddc6126c\" (UID: \"64cd8192-55f2-4d23-8337-068eddc6126c\") "
	Aug 12 10:29:40 addons-883541 kubelet[1260]: I0812 10:29:40.510922    1260 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hk2j\" (UniqueName: \"kubernetes.io/projected/64cd8192-55f2-4d23-8337-068eddc6126c-kube-api-access-5hk2j\") pod \"64cd8192-55f2-4d23-8337-068eddc6126c\" (UID: \"64cd8192-55f2-4d23-8337-068eddc6126c\") "
	Aug 12 10:29:40 addons-883541 kubelet[1260]: I0812 10:29:40.512212    1260 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64cd8192-55f2-4d23-8337-068eddc6126c-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "64cd8192-55f2-4d23-8337-068eddc6126c" (UID: "64cd8192-55f2-4d23-8337-068eddc6126c"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 12 10:29:40 addons-883541 kubelet[1260]: I0812 10:29:40.521985    1260 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64cd8192-55f2-4d23-8337-068eddc6126c-kube-api-access-5hk2j" (OuterVolumeSpecName: "kube-api-access-5hk2j") pod "64cd8192-55f2-4d23-8337-068eddc6126c" (UID: "64cd8192-55f2-4d23-8337-068eddc6126c"). InnerVolumeSpecName "kube-api-access-5hk2j". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 12 10:29:40 addons-883541 kubelet[1260]: I0812 10:29:40.611833    1260 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/64cd8192-55f2-4d23-8337-068eddc6126c-tmp-dir\") on node \"addons-883541\" DevicePath \"\""
	Aug 12 10:29:40 addons-883541 kubelet[1260]: I0812 10:29:40.611865    1260 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-5hk2j\" (UniqueName: \"kubernetes.io/projected/64cd8192-55f2-4d23-8337-068eddc6126c-kube-api-access-5hk2j\") on node \"addons-883541\" DevicePath \"\""
	
	
	==> storage-provisioner [982e871e7b916db2660344775e283b55cdd6cbdeb7e68ef1ef253e80744917af] <==
	I0812 10:22:09.384105       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0812 10:22:09.421912       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0812 10:22:09.421959       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0812 10:22:09.439730       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0812 10:22:09.440343       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"04becd8f-d2b0-4a27-8098-732cb8ea640c", APIVersion:"v1", ResourceVersion:"765", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-883541_431f8ced-34f4-43e5-a48a-4c9b94d51b87 became leader
	I0812 10:22:09.440395       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-883541_431f8ced-34f4-43e5-a48a-4c9b94d51b87!
	I0812 10:22:09.541253       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-883541_431f8ced-34f4-43e5-a48a-4c9b94d51b87!
	

                                                
                                                
-- /stdout --
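The captured component logs above show two recurring patterns: etcd repeatedly warns that applying requests "took too long" (up to ~430ms against its 100ms expected-duration), and the kubelet's periodic iptables canary fails because the guest kernel exposes no ip6tables nat table. A minimal follow-up sketch, assuming the etcd static pod is named etcd-addons-883541 and that minikube's etcd client certificates live under /var/lib/minikube/certs/etcd/ (typical for a kubeadm-style minikube node, but neither detail is confirmed by the log itself):

	# etcd: endpoint health and DB status; sustained slow applies usually point at disk or CPU pressure on the VM
	kubectl --context addons-883541 -n kube-system exec etcd-addons-883541 -- \
	  etcdctl --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	          --cert=/var/lib/minikube/certs/etcd/server.crt \
	          --key=/var/lib/minikube/certs/etcd/server.key \
	          endpoint status --write-out=table

	# kubelet canary: check whether ip6table_nat can be loaded; if the module is simply absent,
	# the message is noise for an IPv4-only cluster rather than a functional failure
	out/minikube-linux-amd64 -p addons-883541 ssh "sudo modprobe ip6table_nat; sudo ip6tables -t nat -L -n | head"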
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-883541 -n addons-883541
helpers_test.go:261: (dbg) Run:  kubectl --context addons-883541 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (334.68s)
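The tail of the dump suggests the metrics-server pod was being torn down as the test timed out (the ReplicaSet resync for kube-system/metrics-server-c59844bb4 at 10:29:39, followed by the kubelet unmounting what appears to be that pod's tmp-dir and service-account token volumes). A minimal sketch of the usual follow-up checks, assuming the addon ships the upstream metrics-server defaults (APIService v1beta1.metrics.k8s.io, label k8s-app=metrics-server; neither name appears in the captured logs):

	# is the aggregated metrics API registered and reporting Available?
	kubectl --context addons-883541 get apiservice v1beta1.metrics.k8s.io

	# is the deployment running, and does the API actually answer?
	kubectl --context addons-883541 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context addons-883541 top nodes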

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.29s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-883541
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-883541: exit status 82 (2m0.466483727s)

                                                
                                                
-- stdout --
	* Stopping node "addons-883541"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-883541" : exit status 82
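Exit status 82 corresponds to the GUEST_STOP_TIMEOUT path shown in the stderr above: the driver exhausted its stop loop with the VM still reported as "Running". A minimal sketch of how one might follow up on the host, assuming the kvm2 driver's usual convention of naming the libvirt domain after the profile (not shown in this output):

	# retry the stop with verbose driver logging to see where it stalls
	out/minikube-linux-amd64 stop -p addons-883541 --alsologtostderr -v=3

	# last resort with the kvm2 driver: force the libvirt domain off
	virsh list --all
	virsh destroy addons-883541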
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-883541
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-883541: exit status 11 (21.532666161s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-883541" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-883541
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-883541: exit status 11 (6.144373873s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-883541" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-883541
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-883541: exit status 11 (6.144215981s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-883541" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.29s)
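Every addon command after the failed stop died on the same dial error, "connect: no route to host" toward 192.168.39.215:22, so the guest was left neither cleanly stopped nor reachable over SSH. A short reachability sketch before re-running, assuming nc is available on the CI host (the IP comes from the errors above):

	# can the guest's SSH port be reached at all, and what does minikube think the state is?
	nc -vz 192.168.39.215 22
	out/minikube-linux-amd64 status -p addons-883541 --alsologtostderr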

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 node stop m02 -v=7 --alsologtostderr
E0812 10:42:07.858828   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-919901 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.483226056s)

                                                
                                                
-- stdout --
	* Stopping node "ha-919901-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 10:41:28.262521   26264 out.go:291] Setting OutFile to fd 1 ...
	I0812 10:41:28.262684   26264 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:41:28.262695   26264 out.go:304] Setting ErrFile to fd 2...
	I0812 10:41:28.262699   26264 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:41:28.262936   26264 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 10:41:28.263228   26264 mustload.go:65] Loading cluster: ha-919901
	I0812 10:41:28.263684   26264 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:41:28.263704   26264 stop.go:39] StopHost: ha-919901-m02
	I0812 10:41:28.264094   26264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:41:28.264133   26264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:41:28.281064   26264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36849
	I0812 10:41:28.281543   26264 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:41:28.282087   26264 main.go:141] libmachine: Using API Version  1
	I0812 10:41:28.282108   26264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:41:28.282513   26264 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:41:28.285029   26264 out.go:177] * Stopping node "ha-919901-m02"  ...
	I0812 10:41:28.286508   26264 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0812 10:41:28.286546   26264 main.go:141] libmachine: (ha-919901-m02) Calling .DriverName
	I0812 10:41:28.286859   26264 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0812 10:41:28.286907   26264 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHHostname
	I0812 10:41:28.289984   26264 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:41:28.290525   26264 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:41:28.290552   26264 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:41:28.290741   26264 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHPort
	I0812 10:41:28.290946   26264 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:41:28.291111   26264 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHUsername
	I0812 10:41:28.291265   26264 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02/id_rsa Username:docker}
	I0812 10:41:28.372980   26264 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0812 10:41:28.426672   26264 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0812 10:41:28.482436   26264 main.go:141] libmachine: Stopping "ha-919901-m02"...
	I0812 10:41:28.482467   26264 main.go:141] libmachine: (ha-919901-m02) Calling .GetState
	I0812 10:41:28.484108   26264 main.go:141] libmachine: (ha-919901-m02) Calling .Stop
	I0812 10:41:28.487791   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 0/120
	I0812 10:41:29.489181   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 1/120
	I0812 10:41:30.490501   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 2/120
	I0812 10:41:31.491972   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 3/120
	I0812 10:41:32.494090   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 4/120
	I0812 10:41:33.496082   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 5/120
	I0812 10:41:34.497604   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 6/120
	I0812 10:41:35.499642   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 7/120
	I0812 10:41:36.501357   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 8/120
	I0812 10:41:37.503403   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 9/120
	I0812 10:41:38.505755   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 10/120
	I0812 10:41:39.507411   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 11/120
	I0812 10:41:40.508819   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 12/120
	I0812 10:41:41.510166   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 13/120
	I0812 10:41:42.511566   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 14/120
	I0812 10:41:43.513755   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 15/120
	I0812 10:41:44.515433   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 16/120
	I0812 10:41:45.516963   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 17/120
	I0812 10:41:46.518264   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 18/120
	I0812 10:41:47.519866   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 19/120
	I0812 10:41:48.522270   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 20/120
	I0812 10:41:49.523866   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 21/120
	I0812 10:41:50.526356   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 22/120
	I0812 10:41:51.528479   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 23/120
	I0812 10:41:52.529878   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 24/120
	I0812 10:41:53.532043   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 25/120
	I0812 10:41:54.534083   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 26/120
	I0812 10:41:55.535434   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 27/120
	I0812 10:41:56.536709   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 28/120
	I0812 10:41:57.538408   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 29/120
	I0812 10:41:58.540461   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 30/120
	I0812 10:41:59.542933   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 31/120
	I0812 10:42:00.544821   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 32/120
	I0812 10:42:01.546087   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 33/120
	I0812 10:42:02.547493   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 34/120
	I0812 10:42:03.549783   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 35/120
	I0812 10:42:04.551543   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 36/120
	I0812 10:42:05.553264   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 37/120
	I0812 10:42:06.555557   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 38/120
	I0812 10:42:07.557147   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 39/120
	I0812 10:42:08.559671   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 40/120
	I0812 10:42:09.561627   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 41/120
	I0812 10:42:10.564127   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 42/120
	I0812 10:42:11.566040   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 43/120
	I0812 10:42:12.567539   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 44/120
	I0812 10:42:13.569460   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 45/120
	I0812 10:42:14.571226   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 46/120
	I0812 10:42:15.573393   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 47/120
	I0812 10:42:16.575636   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 48/120
	I0812 10:42:17.577557   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 49/120
	I0812 10:42:18.579695   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 50/120
	I0812 10:42:19.581987   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 51/120
	I0812 10:42:20.583876   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 52/120
	I0812 10:42:21.585827   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 53/120
	I0812 10:42:22.588039   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 54/120
	I0812 10:42:23.590361   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 55/120
	I0812 10:42:24.591658   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 56/120
	I0812 10:42:25.593143   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 57/120
	I0812 10:42:26.594602   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 58/120
	I0812 10:42:27.595985   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 59/120
	I0812 10:42:28.597306   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 60/120
	I0812 10:42:29.598717   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 61/120
	I0812 10:42:30.600052   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 62/120
	I0812 10:42:31.601851   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 63/120
	I0812 10:42:32.603804   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 64/120
	I0812 10:42:33.605739   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 65/120
	I0812 10:42:34.607805   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 66/120
	I0812 10:42:35.609335   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 67/120
	I0812 10:42:36.611531   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 68/120
	I0812 10:42:37.613079   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 69/120
	I0812 10:42:38.615359   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 70/120
	I0812 10:42:39.616767   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 71/120
	I0812 10:42:40.618360   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 72/120
	I0812 10:42:41.619719   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 73/120
	I0812 10:42:42.621243   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 74/120
	I0812 10:42:43.623452   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 75/120
	I0812 10:42:44.625235   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 76/120
	I0812 10:42:45.627373   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 77/120
	I0812 10:42:46.629192   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 78/120
	I0812 10:42:47.631646   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 79/120
	I0812 10:42:48.633687   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 80/120
	I0812 10:42:49.635437   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 81/120
	I0812 10:42:50.637365   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 82/120
	I0812 10:42:51.639548   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 83/120
	I0812 10:42:52.641003   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 84/120
	I0812 10:42:53.642855   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 85/120
	I0812 10:42:54.644829   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 86/120
	I0812 10:42:55.646265   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 87/120
	I0812 10:42:56.647562   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 88/120
	I0812 10:42:57.649122   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 89/120
	I0812 10:42:58.651289   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 90/120
	I0812 10:42:59.652745   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 91/120
	I0812 10:43:00.654008   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 92/120
	I0812 10:43:01.655751   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 93/120
	I0812 10:43:02.657182   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 94/120
	I0812 10:43:03.659021   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 95/120
	I0812 10:43:04.660576   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 96/120
	I0812 10:43:05.661959   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 97/120
	I0812 10:43:06.663697   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 98/120
	I0812 10:43:07.664843   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 99/120
	I0812 10:43:08.667052   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 100/120
	I0812 10:43:09.668624   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 101/120
	I0812 10:43:10.670085   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 102/120
	I0812 10:43:11.671405   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 103/120
	I0812 10:43:12.672946   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 104/120
	I0812 10:43:13.675118   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 105/120
	I0812 10:43:14.676522   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 106/120
	I0812 10:43:15.678006   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 107/120
	I0812 10:43:16.679562   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 108/120
	I0812 10:43:17.680932   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 109/120
	I0812 10:43:18.683207   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 110/120
	I0812 10:43:19.684431   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 111/120
	I0812 10:43:20.685857   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 112/120
	I0812 10:43:21.687347   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 113/120
	I0812 10:43:22.689117   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 114/120
	I0812 10:43:23.691034   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 115/120
	I0812 10:43:24.692593   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 116/120
	I0812 10:43:25.694433   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 117/120
	I0812 10:43:26.697144   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 118/120
	I0812 10:43:27.698680   26264 main.go:141] libmachine: (ha-919901-m02) Waiting for machine to stop 119/120
	I0812 10:43:28.699738   26264 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0812 10:43:28.699884   26264 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-919901 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 status -v=7 --alsologtostderr
E0812 10:43:29.779368   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
E0812 10:43:30.975875   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-919901 status -v=7 --alsologtostderr: exit status 3 (19.198687276s)

                                                
                                                
-- stdout --
	ha-919901
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-919901-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-919901-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-919901-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 10:43:28.744100   26711 out.go:291] Setting OutFile to fd 1 ...
	I0812 10:43:28.744364   26711 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:43:28.744373   26711 out.go:304] Setting ErrFile to fd 2...
	I0812 10:43:28.744377   26711 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:43:28.744573   26711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 10:43:28.744747   26711 out.go:298] Setting JSON to false
	I0812 10:43:28.744771   26711 mustload.go:65] Loading cluster: ha-919901
	I0812 10:43:28.744808   26711 notify.go:220] Checking for updates...
	I0812 10:43:28.745242   26711 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:43:28.745262   26711 status.go:255] checking status of ha-919901 ...
	I0812 10:43:28.745697   26711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:43:28.745752   26711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:43:28.761138   26711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39259
	I0812 10:43:28.761623   26711 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:43:28.762239   26711 main.go:141] libmachine: Using API Version  1
	I0812 10:43:28.762266   26711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:43:28.762659   26711 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:43:28.762872   26711 main.go:141] libmachine: (ha-919901) Calling .GetState
	I0812 10:43:28.764531   26711 status.go:330] ha-919901 host status = "Running" (err=<nil>)
	I0812 10:43:28.764547   26711 host.go:66] Checking if "ha-919901" exists ...
	I0812 10:43:28.764838   26711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:43:28.764935   26711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:43:28.779741   26711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42727
	I0812 10:43:28.780197   26711 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:43:28.780708   26711 main.go:141] libmachine: Using API Version  1
	I0812 10:43:28.780729   26711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:43:28.781080   26711 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:43:28.781301   26711 main.go:141] libmachine: (ha-919901) Calling .GetIP
	I0812 10:43:28.783947   26711 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:43:28.784404   26711 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:43:28.784439   26711 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:43:28.784494   26711 host.go:66] Checking if "ha-919901" exists ...
	I0812 10:43:28.784781   26711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:43:28.784846   26711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:43:28.800198   26711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37829
	I0812 10:43:28.800742   26711 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:43:28.801295   26711 main.go:141] libmachine: Using API Version  1
	I0812 10:43:28.801320   26711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:43:28.801651   26711 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:43:28.801892   26711 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:43:28.802084   26711 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:43:28.802105   26711 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:43:28.804806   26711 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:43:28.805369   26711 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:43:28.805403   26711 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:43:28.805536   26711 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:43:28.805707   26711 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:43:28.805862   26711 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:43:28.806000   26711 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:43:28.889310   26711 ssh_runner.go:195] Run: systemctl --version
	I0812 10:43:28.897042   26711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:43:28.914549   26711 kubeconfig.go:125] found "ha-919901" server: "https://192.168.39.254:8443"
	I0812 10:43:28.914579   26711 api_server.go:166] Checking apiserver status ...
	I0812 10:43:28.914639   26711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:43:28.931023   26711 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup
	W0812 10:43:28.942905   26711 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 10:43:28.942992   26711 ssh_runner.go:195] Run: ls
	I0812 10:43:28.948178   26711 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 10:43:28.952676   26711 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 10:43:28.952701   26711 status.go:422] ha-919901 apiserver status = Running (err=<nil>)
	I0812 10:43:28.952711   26711 status.go:257] ha-919901 status: &{Name:ha-919901 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 10:43:28.952736   26711 status.go:255] checking status of ha-919901-m02 ...
	I0812 10:43:28.953068   26711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:43:28.953107   26711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:43:28.967865   26711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36279
	I0812 10:43:28.968301   26711 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:43:28.968843   26711 main.go:141] libmachine: Using API Version  1
	I0812 10:43:28.968892   26711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:43:28.969313   26711 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:43:28.969567   26711 main.go:141] libmachine: (ha-919901-m02) Calling .GetState
	I0812 10:43:28.971288   26711 status.go:330] ha-919901-m02 host status = "Running" (err=<nil>)
	I0812 10:43:28.971307   26711 host.go:66] Checking if "ha-919901-m02" exists ...
	I0812 10:43:28.971588   26711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:43:28.971624   26711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:43:28.987038   26711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38969
	I0812 10:43:28.987574   26711 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:43:28.988111   26711 main.go:141] libmachine: Using API Version  1
	I0812 10:43:28.988140   26711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:43:28.988580   26711 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:43:28.988832   26711 main.go:141] libmachine: (ha-919901-m02) Calling .GetIP
	I0812 10:43:28.991891   26711 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:43:28.992310   26711 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:43:28.992337   26711 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:43:28.992469   26711 host.go:66] Checking if "ha-919901-m02" exists ...
	I0812 10:43:28.992779   26711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:43:28.992821   26711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:43:29.008272   26711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40501
	I0812 10:43:29.008814   26711 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:43:29.009329   26711 main.go:141] libmachine: Using API Version  1
	I0812 10:43:29.009359   26711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:43:29.009682   26711 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:43:29.009922   26711 main.go:141] libmachine: (ha-919901-m02) Calling .DriverName
	I0812 10:43:29.010158   26711 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:43:29.010179   26711 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHHostname
	I0812 10:43:29.013345   26711 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:43:29.013728   26711 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:43:29.013759   26711 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:43:29.013924   26711 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHPort
	I0812 10:43:29.014081   26711 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:43:29.014257   26711 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHUsername
	I0812 10:43:29.014408   26711 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02/id_rsa Username:docker}
	W0812 10:43:47.521129   26711 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.139:22: connect: no route to host
	W0812 10:43:47.521231   26711 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.139:22: connect: no route to host
	E0812 10:43:47.521246   26711 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.139:22: connect: no route to host
	I0812 10:43:47.521255   26711 status.go:257] ha-919901-m02 status: &{Name:ha-919901-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0812 10:43:47.521278   26711 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.139:22: connect: no route to host
	I0812 10:43:47.521288   26711 status.go:255] checking status of ha-919901-m03 ...
	I0812 10:43:47.521630   26711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:43:47.521673   26711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:43:47.539965   26711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35661
	I0812 10:43:47.540396   26711 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:43:47.540919   26711 main.go:141] libmachine: Using API Version  1
	I0812 10:43:47.540956   26711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:43:47.541331   26711 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:43:47.541555   26711 main.go:141] libmachine: (ha-919901-m03) Calling .GetState
	I0812 10:43:47.543470   26711 status.go:330] ha-919901-m03 host status = "Running" (err=<nil>)
	I0812 10:43:47.543488   26711 host.go:66] Checking if "ha-919901-m03" exists ...
	I0812 10:43:47.543798   26711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:43:47.543841   26711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:43:47.558910   26711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33089
	I0812 10:43:47.559424   26711 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:43:47.560096   26711 main.go:141] libmachine: Using API Version  1
	I0812 10:43:47.560116   26711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:43:47.560412   26711 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:43:47.560575   26711 main.go:141] libmachine: (ha-919901-m03) Calling .GetIP
	I0812 10:43:47.563342   26711 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:43:47.563769   26711 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:43:47.563791   26711 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:43:47.563974   26711 host.go:66] Checking if "ha-919901-m03" exists ...
	I0812 10:43:47.564266   26711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:43:47.564318   26711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:43:47.579503   26711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35329
	I0812 10:43:47.579933   26711 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:43:47.580459   26711 main.go:141] libmachine: Using API Version  1
	I0812 10:43:47.580481   26711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:43:47.580822   26711 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:43:47.581038   26711 main.go:141] libmachine: (ha-919901-m03) Calling .DriverName
	I0812 10:43:47.581222   26711 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:43:47.581248   26711 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHHostname
	I0812 10:43:47.584299   26711 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:43:47.584766   26711 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:43:47.584791   26711 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:43:47.584968   26711 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHPort
	I0812 10:43:47.585121   26711 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:43:47.585254   26711 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHUsername
	I0812 10:43:47.585468   26711 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/id_rsa Username:docker}
	I0812 10:43:47.674813   26711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:43:47.696741   26711 kubeconfig.go:125] found "ha-919901" server: "https://192.168.39.254:8443"
	I0812 10:43:47.696767   26711 api_server.go:166] Checking apiserver status ...
	I0812 10:43:47.696797   26711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:43:47.712955   26711 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup
	W0812 10:43:47.723459   26711 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 10:43:47.723510   26711 ssh_runner.go:195] Run: ls
	I0812 10:43:47.728678   26711 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 10:43:47.733548   26711 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 10:43:47.733573   26711 status.go:422] ha-919901-m03 apiserver status = Running (err=<nil>)
	I0812 10:43:47.733586   26711 status.go:257] ha-919901-m03 status: &{Name:ha-919901-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 10:43:47.733603   26711 status.go:255] checking status of ha-919901-m04 ...
	I0812 10:43:47.733914   26711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:43:47.733958   26711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:43:47.748993   26711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45103
	I0812 10:43:47.749522   26711 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:43:47.750225   26711 main.go:141] libmachine: Using API Version  1
	I0812 10:43:47.750253   26711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:43:47.750687   26711 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:43:47.750909   26711 main.go:141] libmachine: (ha-919901-m04) Calling .GetState
	I0812 10:43:47.752614   26711 status.go:330] ha-919901-m04 host status = "Running" (err=<nil>)
	I0812 10:43:47.752628   26711 host.go:66] Checking if "ha-919901-m04" exists ...
	I0812 10:43:47.752928   26711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:43:47.752961   26711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:43:47.770158   26711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35457
	I0812 10:43:47.770668   26711 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:43:47.771272   26711 main.go:141] libmachine: Using API Version  1
	I0812 10:43:47.771293   26711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:43:47.771772   26711 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:43:47.771973   26711 main.go:141] libmachine: (ha-919901-m04) Calling .GetIP
	I0812 10:43:47.774989   26711 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:43:47.775477   26711 main.go:141] libmachine: (ha-919901-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:f6:73", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:40:36 +0000 UTC Type:0 Mac:52:54:00:4d:f6:73 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:ha-919901-m04 Clientid:01:52:54:00:4d:f6:73}
	I0812 10:43:47.775519   26711 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined IP address 192.168.39.218 and MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:43:47.775647   26711 host.go:66] Checking if "ha-919901-m04" exists ...
	I0812 10:43:47.775970   26711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:43:47.776015   26711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:43:47.792593   26711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43177
	I0812 10:43:47.793077   26711 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:43:47.793683   26711 main.go:141] libmachine: Using API Version  1
	I0812 10:43:47.793705   26711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:43:47.794017   26711 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:43:47.794211   26711 main.go:141] libmachine: (ha-919901-m04) Calling .DriverName
	I0812 10:43:47.794408   26711 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:43:47.794430   26711 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHHostname
	I0812 10:43:47.797634   26711 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:43:47.798153   26711 main.go:141] libmachine: (ha-919901-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:f6:73", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:40:36 +0000 UTC Type:0 Mac:52:54:00:4d:f6:73 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:ha-919901-m04 Clientid:01:52:54:00:4d:f6:73}
	I0812 10:43:47.798181   26711 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined IP address 192.168.39.218 and MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:43:47.798510   26711 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHPort
	I0812 10:43:47.798663   26711 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHKeyPath
	I0812 10:43:47.798831   26711 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHUsername
	I0812 10:43:47.798959   26711 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m04/id_rsa Username:docker}
	I0812 10:43:47.881766   26711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:43:47.898551   26711 status.go:257] ha-919901-m04 status: &{Name:ha-919901-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-919901 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-919901 -n ha-919901
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-919901 logs -n 25: (1.404974927s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-919901 cp ha-919901-m03:/home/docker/cp-test.txt                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2587644134/001/cp-test_ha-919901-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n                                                                 | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-919901 cp ha-919901-m03:/home/docker/cp-test.txt                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901:/home/docker/cp-test_ha-919901-m03_ha-919901.txt                       |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n                                                                 | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n ha-919901 sudo cat                                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | /home/docker/cp-test_ha-919901-m03_ha-919901.txt                                 |           |         |         |                     |                     |
	| cp      | ha-919901 cp ha-919901-m03:/home/docker/cp-test.txt                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m02:/home/docker/cp-test_ha-919901-m03_ha-919901-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n                                                                 | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n ha-919901-m02 sudo cat                                          | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | /home/docker/cp-test_ha-919901-m03_ha-919901-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-919901 cp ha-919901-m03:/home/docker/cp-test.txt                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m04:/home/docker/cp-test_ha-919901-m03_ha-919901-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n                                                                 | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n ha-919901-m04 sudo cat                                          | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | /home/docker/cp-test_ha-919901-m03_ha-919901-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-919901 cp testdata/cp-test.txt                                                | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n                                                                 | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-919901 cp ha-919901-m04:/home/docker/cp-test.txt                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2587644134/001/cp-test_ha-919901-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n                                                                 | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-919901 cp ha-919901-m04:/home/docker/cp-test.txt                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901:/home/docker/cp-test_ha-919901-m04_ha-919901.txt                       |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n                                                                 | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n ha-919901 sudo cat                                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | /home/docker/cp-test_ha-919901-m04_ha-919901.txt                                 |           |         |         |                     |                     |
	| cp      | ha-919901 cp ha-919901-m04:/home/docker/cp-test.txt                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m02:/home/docker/cp-test_ha-919901-m04_ha-919901-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n                                                                 | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n ha-919901-m02 sudo cat                                          | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | /home/docker/cp-test_ha-919901-m04_ha-919901-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-919901 cp ha-919901-m04:/home/docker/cp-test.txt                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m03:/home/docker/cp-test_ha-919901-m04_ha-919901-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n                                                                 | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n ha-919901-m03 sudo cat                                          | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | /home/docker/cp-test_ha-919901-m04_ha-919901-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-919901 node stop m02 -v=7                                                     | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 10:36:36
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 10:36:36.258715   22139 out.go:291] Setting OutFile to fd 1 ...
	I0812 10:36:36.258970   22139 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:36:36.258979   22139 out.go:304] Setting ErrFile to fd 2...
	I0812 10:36:36.258983   22139 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:36:36.259142   22139 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 10:36:36.259711   22139 out.go:298] Setting JSON to false
	I0812 10:36:36.260545   22139 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1137,"bootTime":1723457859,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 10:36:36.260611   22139 start.go:139] virtualization: kvm guest
	I0812 10:36:36.262778   22139 out.go:177] * [ha-919901] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 10:36:36.264060   22139 notify.go:220] Checking for updates...
	I0812 10:36:36.264095   22139 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 10:36:36.265668   22139 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 10:36:36.267193   22139 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 10:36:36.268817   22139 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 10:36:36.270270   22139 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 10:36:36.271475   22139 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 10:36:36.272701   22139 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 10:36:36.308466   22139 out.go:177] * Using the kvm2 driver based on user configuration
	I0812 10:36:36.309854   22139 start.go:297] selected driver: kvm2
	I0812 10:36:36.309872   22139 start.go:901] validating driver "kvm2" against <nil>
	I0812 10:36:36.309883   22139 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 10:36:36.310563   22139 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 10:36:36.310644   22139 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19409-3774/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 10:36:36.326403   22139 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 10:36:36.326467   22139 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 10:36:36.326691   22139 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 10:36:36.326719   22139 cni.go:84] Creating CNI manager for ""
	I0812 10:36:36.326732   22139 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0812 10:36:36.326740   22139 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0812 10:36:36.326793   22139 start.go:340] cluster config:
	{Name:ha-919901 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-919901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 10:36:36.326886   22139 iso.go:125] acquiring lock: {Name:mk817f4bb6a5031e68978029203802957205757f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 10:36:36.328810   22139 out.go:177] * Starting "ha-919901" primary control-plane node in "ha-919901" cluster
	I0812 10:36:36.330149   22139 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 10:36:36.330196   22139 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0812 10:36:36.330206   22139 cache.go:56] Caching tarball of preloaded images
	I0812 10:36:36.330283   22139 preload.go:172] Found /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 10:36:36.330293   22139 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 10:36:36.330604   22139 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/config.json ...
	I0812 10:36:36.330623   22139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/config.json: {Name:mkdd87194089c92fa3aeaf7fe7c90e067b5812a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:36:36.330763   22139 start.go:360] acquireMachinesLock for ha-919901: {Name:mkf1f3ff7da562a087a1e344d13e67c3c8140973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 10:36:36.330790   22139 start.go:364] duration metric: took 14.602µs to acquireMachinesLock for "ha-919901"
	I0812 10:36:36.330805   22139 start.go:93] Provisioning new machine with config: &{Name:ha-919901 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-919901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 10:36:36.330860   22139 start.go:125] createHost starting for "" (driver="kvm2")
	I0812 10:36:36.332733   22139 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 10:36:36.332909   22139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:36:36.332965   22139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:36:36.347922   22139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43033
	I0812 10:36:36.348426   22139 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:36:36.349005   22139 main.go:141] libmachine: Using API Version  1
	I0812 10:36:36.349040   22139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:36:36.349444   22139 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:36:36.349666   22139 main.go:141] libmachine: (ha-919901) Calling .GetMachineName
	I0812 10:36:36.349842   22139 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:36:36.350016   22139 start.go:159] libmachine.API.Create for "ha-919901" (driver="kvm2")
	I0812 10:36:36.350047   22139 client.go:168] LocalClient.Create starting
	I0812 10:36:36.350084   22139 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem
	I0812 10:36:36.350130   22139 main.go:141] libmachine: Decoding PEM data...
	I0812 10:36:36.350156   22139 main.go:141] libmachine: Parsing certificate...
	I0812 10:36:36.350223   22139 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem
	I0812 10:36:36.350250   22139 main.go:141] libmachine: Decoding PEM data...
	I0812 10:36:36.350269   22139 main.go:141] libmachine: Parsing certificate...
	I0812 10:36:36.350299   22139 main.go:141] libmachine: Running pre-create checks...
	I0812 10:36:36.350312   22139 main.go:141] libmachine: (ha-919901) Calling .PreCreateCheck
	I0812 10:36:36.350680   22139 main.go:141] libmachine: (ha-919901) Calling .GetConfigRaw
	I0812 10:36:36.351097   22139 main.go:141] libmachine: Creating machine...
	I0812 10:36:36.351112   22139 main.go:141] libmachine: (ha-919901) Calling .Create
	I0812 10:36:36.351258   22139 main.go:141] libmachine: (ha-919901) Creating KVM machine...
	I0812 10:36:36.352740   22139 main.go:141] libmachine: (ha-919901) DBG | found existing default KVM network
	I0812 10:36:36.353576   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:36.353428   22162 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0812 10:36:36.353636   22139 main.go:141] libmachine: (ha-919901) DBG | created network xml: 
	I0812 10:36:36.353659   22139 main.go:141] libmachine: (ha-919901) DBG | <network>
	I0812 10:36:36.353671   22139 main.go:141] libmachine: (ha-919901) DBG |   <name>mk-ha-919901</name>
	I0812 10:36:36.353692   22139 main.go:141] libmachine: (ha-919901) DBG |   <dns enable='no'/>
	I0812 10:36:36.353707   22139 main.go:141] libmachine: (ha-919901) DBG |   
	I0812 10:36:36.353716   22139 main.go:141] libmachine: (ha-919901) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0812 10:36:36.353725   22139 main.go:141] libmachine: (ha-919901) DBG |     <dhcp>
	I0812 10:36:36.353735   22139 main.go:141] libmachine: (ha-919901) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0812 10:36:36.353765   22139 main.go:141] libmachine: (ha-919901) DBG |     </dhcp>
	I0812 10:36:36.353788   22139 main.go:141] libmachine: (ha-919901) DBG |   </ip>
	I0812 10:36:36.353796   22139 main.go:141] libmachine: (ha-919901) DBG |   
	I0812 10:36:36.353804   22139 main.go:141] libmachine: (ha-919901) DBG | </network>
	I0812 10:36:36.353827   22139 main.go:141] libmachine: (ha-919901) DBG | 
	I0812 10:36:36.359300   22139 main.go:141] libmachine: (ha-919901) DBG | trying to create private KVM network mk-ha-919901 192.168.39.0/24...
	I0812 10:36:36.426191   22139 main.go:141] libmachine: (ha-919901) Setting up store path in /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901 ...
	I0812 10:36:36.426222   22139 main.go:141] libmachine: (ha-919901) Building disk image from file:///home/jenkins/minikube-integration/19409-3774/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0812 10:36:36.426233   22139 main.go:141] libmachine: (ha-919901) DBG | private KVM network mk-ha-919901 192.168.39.0/24 created
	I0812 10:36:36.426248   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:36.426140   22162 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 10:36:36.426285   22139 main.go:141] libmachine: (ha-919901) Downloading /home/jenkins/minikube-integration/19409-3774/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19409-3774/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0812 10:36:36.666261   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:36.666088   22162 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa...
	I0812 10:36:36.725728   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:36.725612   22162 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/ha-919901.rawdisk...
	I0812 10:36:36.725762   22139 main.go:141] libmachine: (ha-919901) DBG | Writing magic tar header
	I0812 10:36:36.725777   22139 main.go:141] libmachine: (ha-919901) DBG | Writing SSH key tar header
	I0812 10:36:36.725787   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:36.725738   22162 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901 ...
	I0812 10:36:36.725830   22139 main.go:141] libmachine: (ha-919901) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901
	I0812 10:36:36.725902   22139 main.go:141] libmachine: (ha-919901) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901 (perms=drwx------)
	I0812 10:36:36.725926   22139 main.go:141] libmachine: (ha-919901) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube/machines (perms=drwxr-xr-x)
	I0812 10:36:36.725937   22139 main.go:141] libmachine: (ha-919901) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube/machines
	I0812 10:36:36.725949   22139 main.go:141] libmachine: (ha-919901) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube (perms=drwxr-xr-x)
	I0812 10:36:36.725976   22139 main.go:141] libmachine: (ha-919901) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774 (perms=drwxrwxr-x)
	I0812 10:36:36.725986   22139 main.go:141] libmachine: (ha-919901) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0812 10:36:36.726005   22139 main.go:141] libmachine: (ha-919901) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0812 10:36:36.726019   22139 main.go:141] libmachine: (ha-919901) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 10:36:36.726027   22139 main.go:141] libmachine: (ha-919901) Creating domain...
	I0812 10:36:36.726067   22139 main.go:141] libmachine: (ha-919901) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774
	I0812 10:36:36.726093   22139 main.go:141] libmachine: (ha-919901) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0812 10:36:36.726106   22139 main.go:141] libmachine: (ha-919901) DBG | Checking permissions on dir: /home/jenkins
	I0812 10:36:36.726120   22139 main.go:141] libmachine: (ha-919901) DBG | Checking permissions on dir: /home
	I0812 10:36:36.726143   22139 main.go:141] libmachine: (ha-919901) DBG | Skipping /home - not owner
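The "Checking permissions" / "Setting executable bit" lines walk up the parent directories of the machine store. A rough Go sketch of that idea follows: make each parent at least traversable (execute bit) and skip directories the current user does not own, as with /home above. The path and exact modes are illustrative, not the real common.go behavior.

package main

import (
    "log"
    "os"
    "path/filepath"
    "syscall"
)

func main() {
    dir := "/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901"
    for ; dir != "/"; dir = filepath.Dir(dir) {
        info, err := os.Stat(dir)
        if err != nil {
            log.Fatal(err)
        }
        st := info.Sys().(*syscall.Stat_t) // Linux-specific owner lookup
        if int(st.Uid) != os.Getuid() {
            log.Printf("Skipping %s - not owner", dir)
            break
        }
        // Add execute bits so the directory can be traversed down to the machine dir.
        if err := os.Chmod(dir, info.Mode().Perm()|0o111); err != nil {
            log.Fatal(err)
        }
    }
}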
	I0812 10:36:36.727230   22139 main.go:141] libmachine: (ha-919901) define libvirt domain using xml: 
	I0812 10:36:36.727246   22139 main.go:141] libmachine: (ha-919901) <domain type='kvm'>
	I0812 10:36:36.727255   22139 main.go:141] libmachine: (ha-919901)   <name>ha-919901</name>
	I0812 10:36:36.727263   22139 main.go:141] libmachine: (ha-919901)   <memory unit='MiB'>2200</memory>
	I0812 10:36:36.727271   22139 main.go:141] libmachine: (ha-919901)   <vcpu>2</vcpu>
	I0812 10:36:36.727278   22139 main.go:141] libmachine: (ha-919901)   <features>
	I0812 10:36:36.727290   22139 main.go:141] libmachine: (ha-919901)     <acpi/>
	I0812 10:36:36.727300   22139 main.go:141] libmachine: (ha-919901)     <apic/>
	I0812 10:36:36.727309   22139 main.go:141] libmachine: (ha-919901)     <pae/>
	I0812 10:36:36.727333   22139 main.go:141] libmachine: (ha-919901)     
	I0812 10:36:36.727344   22139 main.go:141] libmachine: (ha-919901)   </features>
	I0812 10:36:36.727355   22139 main.go:141] libmachine: (ha-919901)   <cpu mode='host-passthrough'>
	I0812 10:36:36.727364   22139 main.go:141] libmachine: (ha-919901)   
	I0812 10:36:36.727374   22139 main.go:141] libmachine: (ha-919901)   </cpu>
	I0812 10:36:36.727389   22139 main.go:141] libmachine: (ha-919901)   <os>
	I0812 10:36:36.727401   22139 main.go:141] libmachine: (ha-919901)     <type>hvm</type>
	I0812 10:36:36.727418   22139 main.go:141] libmachine: (ha-919901)     <boot dev='cdrom'/>
	I0812 10:36:36.727430   22139 main.go:141] libmachine: (ha-919901)     <boot dev='hd'/>
	I0812 10:36:36.727438   22139 main.go:141] libmachine: (ha-919901)     <bootmenu enable='no'/>
	I0812 10:36:36.727449   22139 main.go:141] libmachine: (ha-919901)   </os>
	I0812 10:36:36.727460   22139 main.go:141] libmachine: (ha-919901)   <devices>
	I0812 10:36:36.727471   22139 main.go:141] libmachine: (ha-919901)     <disk type='file' device='cdrom'>
	I0812 10:36:36.727490   22139 main.go:141] libmachine: (ha-919901)       <source file='/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/boot2docker.iso'/>
	I0812 10:36:36.727503   22139 main.go:141] libmachine: (ha-919901)       <target dev='hdc' bus='scsi'/>
	I0812 10:36:36.727513   22139 main.go:141] libmachine: (ha-919901)       <readonly/>
	I0812 10:36:36.727530   22139 main.go:141] libmachine: (ha-919901)     </disk>
	I0812 10:36:36.727541   22139 main.go:141] libmachine: (ha-919901)     <disk type='file' device='disk'>
	I0812 10:36:36.727560   22139 main.go:141] libmachine: (ha-919901)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0812 10:36:36.727580   22139 main.go:141] libmachine: (ha-919901)       <source file='/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/ha-919901.rawdisk'/>
	I0812 10:36:36.727593   22139 main.go:141] libmachine: (ha-919901)       <target dev='hda' bus='virtio'/>
	I0812 10:36:36.727603   22139 main.go:141] libmachine: (ha-919901)     </disk>
	I0812 10:36:36.727614   22139 main.go:141] libmachine: (ha-919901)     <interface type='network'>
	I0812 10:36:36.727626   22139 main.go:141] libmachine: (ha-919901)       <source network='mk-ha-919901'/>
	I0812 10:36:36.727638   22139 main.go:141] libmachine: (ha-919901)       <model type='virtio'/>
	I0812 10:36:36.727653   22139 main.go:141] libmachine: (ha-919901)     </interface>
	I0812 10:36:36.727664   22139 main.go:141] libmachine: (ha-919901)     <interface type='network'>
	I0812 10:36:36.727672   22139 main.go:141] libmachine: (ha-919901)       <source network='default'/>
	I0812 10:36:36.727681   22139 main.go:141] libmachine: (ha-919901)       <model type='virtio'/>
	I0812 10:36:36.727691   22139 main.go:141] libmachine: (ha-919901)     </interface>
	I0812 10:36:36.727700   22139 main.go:141] libmachine: (ha-919901)     <serial type='pty'>
	I0812 10:36:36.727711   22139 main.go:141] libmachine: (ha-919901)       <target port='0'/>
	I0812 10:36:36.727739   22139 main.go:141] libmachine: (ha-919901)     </serial>
	I0812 10:36:36.727760   22139 main.go:141] libmachine: (ha-919901)     <console type='pty'>
	I0812 10:36:36.727781   22139 main.go:141] libmachine: (ha-919901)       <target type='serial' port='0'/>
	I0812 10:36:36.727798   22139 main.go:141] libmachine: (ha-919901)     </console>
	I0812 10:36:36.727814   22139 main.go:141] libmachine: (ha-919901)     <rng model='virtio'>
	I0812 10:36:36.727831   22139 main.go:141] libmachine: (ha-919901)       <backend model='random'>/dev/random</backend>
	I0812 10:36:36.727844   22139 main.go:141] libmachine: (ha-919901)     </rng>
	I0812 10:36:36.727854   22139 main.go:141] libmachine: (ha-919901)     
	I0812 10:36:36.727873   22139 main.go:141] libmachine: (ha-919901)     
	I0812 10:36:36.727884   22139 main.go:141] libmachine: (ha-919901)   </devices>
	I0812 10:36:36.727893   22139 main.go:141] libmachine: (ha-919901) </domain>
	I0812 10:36:36.727908   22139 main.go:141] libmachine: (ha-919901) 
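The define-then-create flow suggested by "define libvirt domain using xml" and the later "Creating domain..." can be sketched with the libvirt.org/go/libvirt bindings as below. Treat the call sequence as an assumption about the bindings' general shape rather than the driver's actual code, and the XML file name as a placeholder for the domain XML logged above.

package main

import (
    "log"
    "os"

    libvirt "libvirt.org/go/libvirt"
)

func main() {
    conn, err := libvirt.NewConnect("qemu:///system")
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    xml, err := os.ReadFile("ha-919901-domain.xml") // the <domain> XML shown above
    if err != nil {
        log.Fatal(err)
    }

    dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
    if err != nil {
        log.Fatal(err)
    }
    defer dom.Free()

    if err := dom.Create(); err != nil { // start the defined domain
        log.Fatal(err)
    }
}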
	I0812 10:36:36.732085   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:d2:76:8c in network default
	I0812 10:36:36.732658   22139 main.go:141] libmachine: (ha-919901) Ensuring networks are active...
	I0812 10:36:36.732688   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:36.733512   22139 main.go:141] libmachine: (ha-919901) Ensuring network default is active
	I0812 10:36:36.733869   22139 main.go:141] libmachine: (ha-919901) Ensuring network mk-ha-919901 is active
	I0812 10:36:36.734468   22139 main.go:141] libmachine: (ha-919901) Getting domain xml...
	I0812 10:36:36.735258   22139 main.go:141] libmachine: (ha-919901) Creating domain...
	I0812 10:36:37.938658   22139 main.go:141] libmachine: (ha-919901) Waiting to get IP...
	I0812 10:36:37.939346   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:37.939776   22139 main.go:141] libmachine: (ha-919901) DBG | unable to find current IP address of domain ha-919901 in network mk-ha-919901
	I0812 10:36:37.939884   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:37.939787   22162 retry.go:31] will retry after 213.094827ms: waiting for machine to come up
	I0812 10:36:38.154220   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:38.154748   22139 main.go:141] libmachine: (ha-919901) DBG | unable to find current IP address of domain ha-919901 in network mk-ha-919901
	I0812 10:36:38.154779   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:38.154699   22162 retry.go:31] will retry after 338.084889ms: waiting for machine to come up
	I0812 10:36:38.493947   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:38.494320   22139 main.go:141] libmachine: (ha-919901) DBG | unable to find current IP address of domain ha-919901 in network mk-ha-919901
	I0812 10:36:38.494345   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:38.494285   22162 retry.go:31] will retry after 473.305282ms: waiting for machine to come up
	I0812 10:36:38.968861   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:38.969295   22139 main.go:141] libmachine: (ha-919901) DBG | unable to find current IP address of domain ha-919901 in network mk-ha-919901
	I0812 10:36:38.969328   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:38.969235   22162 retry.go:31] will retry after 564.539174ms: waiting for machine to come up
	I0812 10:36:39.535098   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:39.535570   22139 main.go:141] libmachine: (ha-919901) DBG | unable to find current IP address of domain ha-919901 in network mk-ha-919901
	I0812 10:36:39.535601   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:39.535526   22162 retry.go:31] will retry after 604.149167ms: waiting for machine to come up
	I0812 10:36:40.141250   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:40.141758   22139 main.go:141] libmachine: (ha-919901) DBG | unable to find current IP address of domain ha-919901 in network mk-ha-919901
	I0812 10:36:40.141782   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:40.141715   22162 retry.go:31] will retry after 943.023048ms: waiting for machine to come up
	I0812 10:36:41.085777   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:41.086112   22139 main.go:141] libmachine: (ha-919901) DBG | unable to find current IP address of domain ha-919901 in network mk-ha-919901
	I0812 10:36:41.086142   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:41.086064   22162 retry.go:31] will retry after 774.228398ms: waiting for machine to come up
	I0812 10:36:41.861586   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:41.862193   22139 main.go:141] libmachine: (ha-919901) DBG | unable to find current IP address of domain ha-919901 in network mk-ha-919901
	I0812 10:36:41.862222   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:41.862139   22162 retry.go:31] will retry after 1.205515582s: waiting for machine to come up
	I0812 10:36:43.069629   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:43.070159   22139 main.go:141] libmachine: (ha-919901) DBG | unable to find current IP address of domain ha-919901 in network mk-ha-919901
	I0812 10:36:43.070186   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:43.070112   22162 retry.go:31] will retry after 1.834177894s: waiting for machine to come up
	I0812 10:36:44.907232   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:44.907755   22139 main.go:141] libmachine: (ha-919901) DBG | unable to find current IP address of domain ha-919901 in network mk-ha-919901
	I0812 10:36:44.907777   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:44.907711   22162 retry.go:31] will retry after 1.903930049s: waiting for machine to come up
	I0812 10:36:46.813730   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:46.814253   22139 main.go:141] libmachine: (ha-919901) DBG | unable to find current IP address of domain ha-919901 in network mk-ha-919901
	I0812 10:36:46.814277   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:46.814216   22162 retry.go:31] will retry after 2.852173088s: waiting for machine to come up
	I0812 10:36:49.670605   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:49.671236   22139 main.go:141] libmachine: (ha-919901) DBG | unable to find current IP address of domain ha-919901 in network mk-ha-919901
	I0812 10:36:49.671259   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:49.671167   22162 retry.go:31] will retry after 3.596494825s: waiting for machine to come up
	I0812 10:36:53.270609   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:53.271187   22139 main.go:141] libmachine: (ha-919901) DBG | unable to find current IP address of domain ha-919901 in network mk-ha-919901
	I0812 10:36:53.271212   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:53.271153   22162 retry.go:31] will retry after 3.244912687s: waiting for machine to come up
	I0812 10:36:56.517582   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:56.518056   22139 main.go:141] libmachine: (ha-919901) Found IP for machine: 192.168.39.5
	I0812 10:36:56.518072   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has current primary IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
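The repeated "will retry after ..." lines above are a poll-with-growing-backoff loop waiting for the VM to obtain a DHCP lease. A minimal Go sketch of that pattern follows; lookupIP is a hypothetical stand-in for the driver's lease check, and the delays and deadline are illustrative.

package main

import (
    "errors"
    "fmt"
    "math/rand"
    "time"
)

func lookupIP() (string, error) {
    // Placeholder: the real driver inspects the libvirt network's DHCP leases.
    return "", errors.New("unable to find current IP address")
}

func main() {
    deadline := time.Now().Add(3 * time.Minute)
    delay := 200 * time.Millisecond
    for time.Now().Before(deadline) {
        if ip, err := lookupIP(); err == nil {
            fmt.Println("Found IP for machine:", ip)
            return
        }
        wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
        fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
        time.Sleep(wait)
        delay = delay * 3 / 2 // back off gradually
    }
    fmt.Println("timed out waiting for an IP")
}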
	I0812 10:36:56.518078   22139 main.go:141] libmachine: (ha-919901) Reserving static IP address...
	I0812 10:36:56.518512   22139 main.go:141] libmachine: (ha-919901) DBG | unable to find host DHCP lease matching {name: "ha-919901", mac: "52:54:00:8b:40:2a", ip: "192.168.39.5"} in network mk-ha-919901
	I0812 10:36:56.598209   22139 main.go:141] libmachine: (ha-919901) DBG | Getting to WaitForSSH function...
	I0812 10:36:56.598245   22139 main.go:141] libmachine: (ha-919901) Reserved static IP address: 192.168.39.5
	I0812 10:36:56.598257   22139 main.go:141] libmachine: (ha-919901) Waiting for SSH to be available...
	I0812 10:36:56.600922   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:56.601331   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:56.601360   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:56.601519   22139 main.go:141] libmachine: (ha-919901) DBG | Using SSH client type: external
	I0812 10:36:56.601532   22139 main.go:141] libmachine: (ha-919901) DBG | Using SSH private key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa (-rw-------)
	I0812 10:36:56.601557   22139 main.go:141] libmachine: (ha-919901) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.5 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0812 10:36:56.601582   22139 main.go:141] libmachine: (ha-919901) DBG | About to run SSH command:
	I0812 10:36:56.601595   22139 main.go:141] libmachine: (ha-919901) DBG | exit 0
	I0812 10:36:56.729201   22139 main.go:141] libmachine: (ha-919901) DBG | SSH cmd err, output: <nil>: 
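The "external" SSH probe logged above boils down to running `ssh ... "exit 0"` with host-key checking disabled and the generated identity, and treating exit status 0 as "SSH is available". A self-contained Go sketch of that probe, using the paths and address from this run, is below.

package main

import (
    "log"
    "os/exec"
)

func main() {
    args := []string{
        "-F", "/dev/null",
        "-o", "ConnectionAttempts=3",
        "-o", "ConnectTimeout=10",
        "-o", "StrictHostKeyChecking=no",
        "-o", "UserKnownHostsFile=/dev/null",
        "-o", "PasswordAuthentication=no",
        "-o", "IdentitiesOnly=yes",
        "-i", "/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa",
        "-p", "22",
        "docker@192.168.39.5",
        "exit 0",
    }
    if err := exec.Command("ssh", args...).Run(); err != nil {
        log.Fatalf("SSH not ready yet: %v", err)
    }
    log.Println("SSH is available")
}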
	I0812 10:36:56.729508   22139 main.go:141] libmachine: (ha-919901) KVM machine creation complete!
	I0812 10:36:56.729857   22139 main.go:141] libmachine: (ha-919901) Calling .GetConfigRaw
	I0812 10:36:56.730394   22139 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:36:56.730579   22139 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:36:56.730773   22139 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0812 10:36:56.730801   22139 main.go:141] libmachine: (ha-919901) Calling .GetState
	I0812 10:36:56.732499   22139 main.go:141] libmachine: Detecting operating system of created instance...
	I0812 10:36:56.732518   22139 main.go:141] libmachine: Waiting for SSH to be available...
	I0812 10:36:56.732535   22139 main.go:141] libmachine: Getting to WaitForSSH function...
	I0812 10:36:56.732548   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:36:56.735116   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:56.735464   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:56.735496   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:56.735620   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:36:56.735833   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:36:56.735989   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:36:56.736122   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:36:56.736287   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:36:56.736530   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0812 10:36:56.736544   22139 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0812 10:36:56.844291   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 10:36:56.844315   22139 main.go:141] libmachine: Detecting the provisioner...
	I0812 10:36:56.844323   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:36:56.847109   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:56.847480   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:56.847503   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:56.847673   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:36:56.847879   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:36:56.848116   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:36:56.848257   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:36:56.848433   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:36:56.848632   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0812 10:36:56.848647   22139 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0812 10:36:56.957579   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0812 10:36:56.957674   22139 main.go:141] libmachine: found compatible host: buildroot
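Provisioner detection here amounts to reading the ID field from the `cat /etc/os-release` output shown above and matching it against known provisioners (buildroot in this case). A deliberately simple Go sketch of that parsing step:

package main

import (
    "bufio"
    "fmt"
    "strings"
)

func detectProvisioner(osRelease string) string {
    sc := bufio.NewScanner(strings.NewReader(osRelease))
    for sc.Scan() {
        line := sc.Text()
        if strings.HasPrefix(line, "ID=") {
            return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
        }
    }
    return ""
}

func main() {
    out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
    if id := detectProvisioner(out); id == "buildroot" {
        fmt.Println("found compatible host:", id)
    }
}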
	I0812 10:36:56.957688   22139 main.go:141] libmachine: Provisioning with buildroot...
	I0812 10:36:56.957698   22139 main.go:141] libmachine: (ha-919901) Calling .GetMachineName
	I0812 10:36:56.957973   22139 buildroot.go:166] provisioning hostname "ha-919901"
	I0812 10:36:56.957999   22139 main.go:141] libmachine: (ha-919901) Calling .GetMachineName
	I0812 10:36:56.958187   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:36:56.960833   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:56.961211   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:56.961234   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:56.961442   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:36:56.961645   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:36:56.961800   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:36:56.961982   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:36:56.962129   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:36:56.962296   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0812 10:36:56.962309   22139 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-919901 && echo "ha-919901" | sudo tee /etc/hostname
	I0812 10:36:57.083078   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-919901
	
	I0812 10:36:57.083102   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:36:57.086058   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.086459   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:57.086480   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.086649   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:36:57.086848   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:36:57.087030   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:36:57.087195   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:36:57.087403   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:36:57.087611   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0812 10:36:57.087635   22139 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-919901' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-919901/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-919901' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 10:36:57.205837   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
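The shell snippet above ensures /etc/hosts maps 127.0.1.1 to the new hostname, rewriting an existing 127.0.1.1 line or appending one. The same guard can be expressed directly in Go; the sketch below works on a local copy of the file and is illustrative only.

package main

import (
    "fmt"
    "os"
    "strings"
)

func ensureHostsEntry(contents, hostname string) string {
    if strings.Contains(contents, hostname) {
        return contents // hostname already mapped
    }
    lines := strings.Split(contents, "\n")
    for i, line := range lines {
        if strings.HasPrefix(line, "127.0.1.1") {
            lines[i] = "127.0.1.1 " + hostname // replace the existing entry
            return strings.Join(lines, "\n")
        }
    }
    return contents + "127.0.1.1 " + hostname + "\n" // append a new entry
}

func main() {
    data, _ := os.ReadFile("hosts.copy") // e.g. a copy of /etc/hosts
    fmt.Print(ensureHostsEntry(string(data), "ha-919901"))
}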
	I0812 10:36:57.205865   22139 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19409-3774/.minikube CaCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19409-3774/.minikube}
	I0812 10:36:57.205889   22139 buildroot.go:174] setting up certificates
	I0812 10:36:57.205902   22139 provision.go:84] configureAuth start
	I0812 10:36:57.205914   22139 main.go:141] libmachine: (ha-919901) Calling .GetMachineName
	I0812 10:36:57.206217   22139 main.go:141] libmachine: (ha-919901) Calling .GetIP
	I0812 10:36:57.209219   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.209615   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:57.209658   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.209816   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:36:57.212139   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.212538   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:57.212565   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.212696   22139 provision.go:143] copyHostCerts
	I0812 10:36:57.212729   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem
	I0812 10:36:57.212778   22139 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem, removing ...
	I0812 10:36:57.212790   22139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem
	I0812 10:36:57.212886   22139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem (1082 bytes)
	I0812 10:36:57.212980   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem
	I0812 10:36:57.213008   22139 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem, removing ...
	I0812 10:36:57.213018   22139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem
	I0812 10:36:57.213054   22139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem (1123 bytes)
	I0812 10:36:57.213111   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem
	I0812 10:36:57.213135   22139 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem, removing ...
	I0812 10:36:57.213144   22139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem
	I0812 10:36:57.213177   22139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem (1679 bytes)
	I0812 10:36:57.213242   22139 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem org=jenkins.ha-919901 san=[127.0.0.1 192.168.39.5 ha-919901 localhost minikube]
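The "generating server cert" step issues a certificate whose SANs match the list logged above (127.0.0.1, 192.168.39.5, ha-919901, localhost, minikube). The Go sketch below shows the shape of that issuance with crypto/x509; to stay self-contained it creates a throwaway CA in memory instead of loading the profile's ca.pem/ca-key.pem, so it is not the real provision.go code path.

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "log"
    "math/big"
    "net"
    "os"
    "time"
)

func main() {
    caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    caTmpl := &x509.Certificate{
        SerialNumber:          big.NewInt(1),
        Subject:               pkix.Name{CommonName: "minikubeCA"},
        NotBefore:             time.Now(),
        NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
        IsCA:                  true,
        KeyUsage:              x509.KeyUsageCertSign,
        BasicConstraintsValid: true,
    }
    caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    caCert, _ := x509.ParseCertificate(caDER)

    srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    srvTmpl := &x509.Certificate{
        SerialNumber: big.NewInt(2),
        Subject:      pkix.Name{Organization: []string{"jenkins.ha-919901"}},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        DNSNames:     []string{"ha-919901", "localhost", "minikube"},
        IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.5")},
    }
    srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    if err != nil {
        log.Fatal(err)
    }
    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}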
	I0812 10:36:57.317181   22139 provision.go:177] copyRemoteCerts
	I0812 10:36:57.317234   22139 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 10:36:57.317256   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:36:57.320500   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.320853   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:57.320905   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.321086   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:36:57.321283   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:36:57.321442   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:36:57.321590   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:36:57.407099   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0812 10:36:57.407176   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0812 10:36:57.430546   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0812 10:36:57.430627   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0812 10:36:57.454395   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0812 10:36:57.454483   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0812 10:36:57.477911   22139 provision.go:87] duration metric: took 271.996825ms to configureAuth
	I0812 10:36:57.477941   22139 buildroot.go:189] setting minikube options for container-runtime
	I0812 10:36:57.478147   22139 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:36:57.478245   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:36:57.481239   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.481781   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:57.481804   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.482039   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:36:57.482240   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:36:57.482418   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:36:57.482564   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:36:57.482780   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:36:57.483016   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0812 10:36:57.483038   22139 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 10:36:57.756403   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 10:36:57.756458   22139 main.go:141] libmachine: Checking connection to Docker...
	I0812 10:36:57.756468   22139 main.go:141] libmachine: (ha-919901) Calling .GetURL
	I0812 10:36:57.757779   22139 main.go:141] libmachine: (ha-919901) DBG | Using libvirt version 6000000
	I0812 10:36:57.761295   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.761720   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:57.761744   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.761945   22139 main.go:141] libmachine: Docker is up and running!
	I0812 10:36:57.761958   22139 main.go:141] libmachine: Reticulating splines...
	I0812 10:36:57.761977   22139 client.go:171] duration metric: took 21.411907085s to LocalClient.Create
	I0812 10:36:57.761998   22139 start.go:167] duration metric: took 21.411984441s to libmachine.API.Create "ha-919901"
	I0812 10:36:57.762007   22139 start.go:293] postStartSetup for "ha-919901" (driver="kvm2")
	I0812 10:36:57.762016   22139 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 10:36:57.762028   22139 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:36:57.762276   22139 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 10:36:57.762306   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:36:57.764595   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.764993   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:57.765015   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.765146   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:36:57.765324   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:36:57.765498   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:36:57.765659   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:36:57.851838   22139 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 10:36:57.856061   22139 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 10:36:57.856086   22139 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/addons for local assets ...
	I0812 10:36:57.856162   22139 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/files for local assets ...
	I0812 10:36:57.856300   22139 filesync.go:149] local asset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> 109272.pem in /etc/ssl/certs
	I0812 10:36:57.856312   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> /etc/ssl/certs/109272.pem
	I0812 10:36:57.856417   22139 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 10:36:57.865276   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /etc/ssl/certs/109272.pem (1708 bytes)
	I0812 10:36:57.888801   22139 start.go:296] duration metric: took 126.783362ms for postStartSetup
	I0812 10:36:57.888852   22139 main.go:141] libmachine: (ha-919901) Calling .GetConfigRaw
	I0812 10:36:57.889571   22139 main.go:141] libmachine: (ha-919901) Calling .GetIP
	I0812 10:36:57.892981   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.893467   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:57.893504   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.893815   22139 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/config.json ...
	I0812 10:36:57.894011   22139 start.go:128] duration metric: took 21.563142297s to createHost
	I0812 10:36:57.894045   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:36:57.896579   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.897009   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:57.897034   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.897233   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:36:57.897463   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:36:57.897662   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:36:57.897864   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:36:57.898053   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:36:57.898219   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0812 10:36:57.898230   22139 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0812 10:36:58.009563   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723459017.984367599
	
	I0812 10:36:58.009592   22139 fix.go:216] guest clock: 1723459017.984367599
	I0812 10:36:58.009603   22139 fix.go:229] Guest: 2024-08-12 10:36:57.984367599 +0000 UTC Remote: 2024-08-12 10:36:57.89402311 +0000 UTC m=+21.678200750 (delta=90.344489ms)
	I0812 10:36:58.009630   22139 fix.go:200] guest clock delta is within tolerance: 90.344489ms
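The guest-clock check above runs `date +%s.%N` over SSH, compares the result with the host clock, and accepts the machine when the delta stays small (about 90ms here). A minimal Go sketch of that comparison follows; the tolerance value is illustrative, not minikube's actual threshold.

package main

import (
    "fmt"
    "math"
    "strconv"
    "strings"
    "time"
)

func main() {
    guestOut := "1723459017.984367599" // captured from `date +%s.%N` on the guest
    parts := strings.SplitN(guestOut, ".", 2)
    sec, _ := strconv.ParseInt(parts[0], 10, 64)
    nsec, _ := strconv.ParseInt(parts[1], 10, 64)
    guest := time.Unix(sec, nsec)

    delta := time.Since(guest)
    tolerance := 5 * time.Second // illustrative threshold
    if math.Abs(float64(delta)) <= float64(tolerance) {
        fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
    } else {
        fmt.Printf("guest clock is off by %v, consider syncing it\n", delta)
    }
}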
	I0812 10:36:58.009638   22139 start.go:83] releasing machines lock for "ha-919901", held for 21.678838542s
	I0812 10:36:58.009668   22139 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:36:58.009964   22139 main.go:141] libmachine: (ha-919901) Calling .GetIP
	I0812 10:36:58.013123   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:58.013592   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:58.013620   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:58.013757   22139 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:36:58.014381   22139 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:36:58.014581   22139 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:36:58.014672   22139 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 10:36:58.014709   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:36:58.014810   22139 ssh_runner.go:195] Run: cat /version.json
	I0812 10:36:58.014830   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:36:58.017738   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:58.017947   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:58.018233   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:58.018256   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:58.018309   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:58.018329   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:58.018463   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:36:58.018594   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:36:58.018678   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:36:58.018771   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:36:58.018790   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:36:58.018887   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:36:58.018945   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:36:58.019043   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:36:58.134918   22139 ssh_runner.go:195] Run: systemctl --version
	I0812 10:36:58.141016   22139 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 10:36:58.306900   22139 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 10:36:58.313419   22139 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 10:36:58.313479   22139 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 10:36:58.329408   22139 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0812 10:36:58.329438   22139 start.go:495] detecting cgroup driver to use...
	I0812 10:36:58.329504   22139 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 10:36:58.348891   22139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 10:36:58.363551   22139 docker.go:217] disabling cri-docker service (if available) ...
	I0812 10:36:58.363610   22139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 10:36:58.377888   22139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 10:36:58.391991   22139 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 10:36:58.516125   22139 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 10:36:58.678304   22139 docker.go:233] disabling docker service ...
	I0812 10:36:58.678383   22139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 10:36:58.692246   22139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 10:36:58.704725   22139 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 10:36:58.816659   22139 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 10:36:58.933414   22139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 10:36:58.947832   22139 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 10:36:58.966113   22139 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0812 10:36:58.966174   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:36:58.976967   22139 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 10:36:58.977042   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:36:58.988239   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:36:58.999792   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:36:59.010341   22139 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 10:36:59.022445   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:36:59.034253   22139 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:36:59.052423   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
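The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place to set the pause image, switch the cgroup manager to cgroupfs, pin conmon to the pod cgroup, and allow unprivileged low ports. As a rough summary of that end state, the Go sketch below writes the same settings to a hypothetical drop-in file; the file name and the TOML section headers are added here for readability and are not part of the logged edits.

package main

import (
    "log"
    "os"
)

const crioDropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() {
    if err := os.WriteFile("99-minikube-example.conf", []byte(crioDropIn), 0o644); err != nil {
        log.Fatal(err)
    }
}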
	I0812 10:36:59.064051   22139 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 10:36:59.073678   22139 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0812 10:36:59.073744   22139 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0812 10:36:59.087397   22139 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 10:36:59.097682   22139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 10:36:59.210522   22139 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0812 10:36:59.347232   22139 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 10:36:59.347310   22139 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 10:36:59.352076   22139 start.go:563] Will wait 60s for crictl version
	I0812 10:36:59.352150   22139 ssh_runner.go:195] Run: which crictl
	I0812 10:36:59.356036   22139 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 10:36:59.393047   22139 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0812 10:36:59.393122   22139 ssh_runner.go:195] Run: crio --version
	I0812 10:36:59.421037   22139 ssh_runner.go:195] Run: crio --version
	I0812 10:36:59.451603   22139 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0812 10:36:59.452978   22139 main.go:141] libmachine: (ha-919901) Calling .GetIP
	I0812 10:36:59.456259   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:59.456659   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:59.456681   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:59.457018   22139 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0812 10:36:59.461511   22139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 10:36:59.473961   22139 kubeadm.go:883] updating cluster {Name:ha-919901 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-919901 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0812 10:36:59.474097   22139 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 10:36:59.474155   22139 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 10:36:59.506010   22139 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0812 10:36:59.506074   22139 ssh_runner.go:195] Run: which lz4
	I0812 10:36:59.510208   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0812 10:36:59.510329   22139 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0812 10:36:59.514484   22139 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0812 10:36:59.514518   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0812 10:37:00.770263   22139 crio.go:462] duration metric: took 1.259980161s to copy over tarball
	I0812 10:37:00.770361   22139 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0812 10:37:02.903214   22139 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.132827142s)
	I0812 10:37:02.903246   22139 crio.go:469] duration metric: took 2.132947707s to extract the tarball
	I0812 10:37:02.903255   22139 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0812 10:37:02.940359   22139 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 10:37:02.987236   22139 crio.go:514] all images are preloaded for cri-o runtime.
	I0812 10:37:02.987259   22139 cache_images.go:84] Images are preloaded, skipping loading
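Editor's note: the lines above show the preload flow: `crictl images --output json` is inspected for the expected kube-apiserver tag, and if it is missing the preloaded tarball is copied over and unpacked with lz4, after which the check passes. A rough sketch of that decision, run directly on the node rather than over SSH; it assumes crictl's JSON output carries an "images" array with "repoTags" fields, and it is not minikube's actual code path:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the container runtime already knows the given tag.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.3")
	if err != nil {
		panic(err)
	}
	if !ok {
		// unpack the preloaded tarball with the same flags the log shows
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
	fmt.Println("preload check done")
}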
	I0812 10:37:02.987267   22139 kubeadm.go:934] updating node { 192.168.39.5 8443 v1.30.3 crio true true} ...
	I0812 10:37:02.987357   22139 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-919901 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-919901 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
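Editor's note: the kubelet drop-in above is rendered from the node config that follows it: the Kubernetes version selects the binary path, and --hostname-override/--node-ip come from the node entry. A minimal text/template sketch of rendering such an ExecStart line; the struct and field names here are hypothetical, not minikube's types:

package main

import (
	"os"
	"text/template"
)

type kubeletOpts struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	// Render the drop-in for the primary control-plane node from the log.
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	_ = t.Execute(os.Stdout, kubeletOpts{
		KubernetesVersion: "v1.30.3",
		NodeName:          "ha-919901",
		NodeIP:            "192.168.39.5",
	})
}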
	I0812 10:37:02.987431   22139 ssh_runner.go:195] Run: crio config
	I0812 10:37:03.030874   22139 cni.go:84] Creating CNI manager for ""
	I0812 10:37:03.030898   22139 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0812 10:37:03.030908   22139 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0812 10:37:03.030928   22139 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.5 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-919901 NodeName:ha-919901 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0812 10:37:03.031049   22139 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-919901"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
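Editor's note: the generated kubeadm.yaml above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch that walks those documents and prints each kind; it assumes the external gopkg.in/yaml.v3 package and reads the path the log later copies the file to, so it is illustrative rather than part of minikube:

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// A YAML stream separated by "---" is decoded document by document.
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			panic(err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}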
	
	I0812 10:37:03.031070   22139 kube-vip.go:115] generating kube-vip config ...
	I0812 10:37:03.031114   22139 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0812 10:37:03.048350   22139 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0812 10:37:03.048469   22139 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
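Editor's note: at 10:37:03.048 the log shows control-plane load-balancing being auto-enabled only after `modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack` succeeds, which is why the manifest above carries lb_enable/lb_port alongside the VIP 192.168.39.254. A hedged sketch of that gating step (illustrative only, not minikube's kube-vip.go):

package main

import (
	"fmt"
	"os/exec"
)

// ipvsAvailable reports whether the IPVS kernel modules needed by kube-vip's
// control-plane load-balancing could be loaded, mirroring the check in the log.
func ipvsAvailable() bool {
	err := exec.Command("sudo", "sh", "-c",
		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack").Run()
	return err == nil
}

func main() {
	env := map[string]string{
		"address":   "192.168.39.254", // HA virtual IP from the cluster config
		"cp_enable": "true",
	}
	if ipvsAvailable() {
		// only then does the generated manifest turn load-balancing on
		env["lb_enable"] = "true"
		env["lb_port"] = "8443"
	}
	fmt.Println(env)
}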
	I0812 10:37:03.048523   22139 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0812 10:37:03.058393   22139 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 10:37:03.058467   22139 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0812 10:37:03.067759   22139 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0812 10:37:03.085108   22139 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 10:37:03.101314   22139 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0812 10:37:03.117869   22139 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0812 10:37:03.134602   22139 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0812 10:37:03.138466   22139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 10:37:03.150761   22139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 10:37:03.279305   22139 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 10:37:03.296808   22139 certs.go:68] Setting up /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901 for IP: 192.168.39.5
	I0812 10:37:03.296836   22139 certs.go:194] generating shared ca certs ...
	I0812 10:37:03.296857   22139 certs.go:226] acquiring lock for ca certs: {Name:mkab0b83a49d5695bc804bf960b468b7a47f73f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:37:03.297052   22139 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key
	I0812 10:37:03.297122   22139 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key
	I0812 10:37:03.297136   22139 certs.go:256] generating profile certs ...
	I0812 10:37:03.297202   22139 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/client.key
	I0812 10:37:03.297221   22139 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/client.crt with IP's: []
	I0812 10:37:03.435567   22139 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/client.crt ...
	I0812 10:37:03.435593   22139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/client.crt: {Name:mkf76e1a58a19a83271906e0f2205d004df4fb05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:37:03.435765   22139 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/client.key ...
	I0812 10:37:03.435777   22139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/client.key: {Name:mk683136baf4eed8ba89411e31352ad328795fe9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:37:03.435852   22139 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key.e53dde7e
	I0812 10:37:03.435867   22139 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt.e53dde7e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.5 192.168.39.254]
	I0812 10:37:03.610013   22139 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt.e53dde7e ...
	I0812 10:37:03.610042   22139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt.e53dde7e: {Name:mk5995f26b966ef3bce995ce8597f3a2b6f2a70a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:37:03.610208   22139 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key.e53dde7e ...
	I0812 10:37:03.610221   22139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key.e53dde7e: {Name:mk1f9b400bd5620d6f41206bd125d9617c3b8ae4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:37:03.610285   22139 certs.go:381] copying /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt.e53dde7e -> /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt
	I0812 10:37:03.610374   22139 certs.go:385] copying /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key.e53dde7e -> /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key
	I0812 10:37:03.610428   22139 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.key
	I0812 10:37:03.610443   22139 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.crt with IP's: []
	I0812 10:37:03.858769   22139 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.crt ...
	I0812 10:37:03.858798   22139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.crt: {Name:mkd64192a1dbaf3f8110409ad2ff7466f51e63ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:37:03.858946   22139 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.key ...
	I0812 10:37:03.858964   22139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.key: {Name:mk2850f76409b91e271b83360aab16a8d76d22e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
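Editor's note: the profile certs generated above are signed by the shared minikube CA, with the apiserver cert covering the SAN set [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.5 192.168.39.254]. A compact crypto/x509 sketch of issuing a serving cert with those IP SANs; it assumes a PKCS#1 RSA CA key, uses an illustrative subject, and is a sketch of the general technique rather than minikube's crypto.go:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// mustPEM reads a PEM file and returns the DER bytes of its first block.
func mustPEM(path string) []byte {
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in " + path)
	}
	return block.Bytes
}

func main() {
	base := "/home/jenkins/minikube-integration/19409-3774/.minikube" // paths from the log
	caCert, err := x509.ParseCertificate(mustPEM(base + "/ca.crt"))
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM(base + "/ca.key")) // assumes a PKCS#1 RSA key
	if err != nil {
		panic(err)
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"}, // illustrative subject
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // SAN set printed in the log
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.5"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}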
	I0812 10:37:03.859054   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0812 10:37:03.859071   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0812 10:37:03.859081   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0812 10:37:03.859094   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0812 10:37:03.859107   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0812 10:37:03.859120   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0812 10:37:03.859133   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0812 10:37:03.859145   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0812 10:37:03.859193   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem (1338 bytes)
	W0812 10:37:03.859225   22139 certs.go:480] ignoring /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927_empty.pem, impossibly tiny 0 bytes
	I0812 10:37:03.859234   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem (1679 bytes)
	I0812 10:37:03.859256   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem (1082 bytes)
	I0812 10:37:03.859277   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem (1123 bytes)
	I0812 10:37:03.859298   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem (1679 bytes)
	I0812 10:37:03.859334   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem (1708 bytes)
	I0812 10:37:03.859368   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:37:03.859381   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem -> /usr/share/ca-certificates/10927.pem
	I0812 10:37:03.859393   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> /usr/share/ca-certificates/109272.pem
	I0812 10:37:03.859961   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 10:37:03.885330   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 10:37:03.908980   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 10:37:03.932653   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 10:37:03.957691   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0812 10:37:03.980958   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0812 10:37:04.004552   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 10:37:04.028960   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0812 10:37:04.052754   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 10:37:04.079489   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem --> /usr/share/ca-certificates/10927.pem (1338 bytes)
	I0812 10:37:04.118871   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /usr/share/ca-certificates/109272.pem (1708 bytes)
	I0812 10:37:04.151137   22139 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 10:37:04.168140   22139 ssh_runner.go:195] Run: openssl version
	I0812 10:37:04.174043   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 10:37:04.184674   22139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:37:04.189753   22139 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:37:04.189813   22139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:37:04.196068   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 10:37:04.206640   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10927.pem && ln -fs /usr/share/ca-certificates/10927.pem /etc/ssl/certs/10927.pem"
	I0812 10:37:04.217384   22139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10927.pem
	I0812 10:37:04.222219   22139 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 10:33 /usr/share/ca-certificates/10927.pem
	I0812 10:37:04.222283   22139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10927.pem
	I0812 10:37:04.227981   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10927.pem /etc/ssl/certs/51391683.0"
	I0812 10:37:04.238698   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109272.pem && ln -fs /usr/share/ca-certificates/109272.pem /etc/ssl/certs/109272.pem"
	I0812 10:37:04.249626   22139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109272.pem
	I0812 10:37:04.254061   22139 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 10:33 /usr/share/ca-certificates/109272.pem
	I0812 10:37:04.254128   22139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109272.pem
	I0812 10:37:04.259663   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109272.pem /etc/ssl/certs/3ec20f2e.0"
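Editor's note: the `openssl x509 -hash -noout` calls above compute the subject-hash names (b5213941.0, 51391683.0, 3ec20f2e.0) under which OpenSSL expects to find CA files when scanning /etc/ssl/certs, and each installed PEM is then symlinked under that name. A small sketch of those two steps; it shells out to the openssl CLI (as the log does) rather than re-implementing the subject hash:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash symlinks certPath into certDir under "<subject-hash>.0",
// the naming scheme OpenSSL uses for CA directory lookups.
func linkBySubjectHash(certPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certDir, hash+".0")
	_ = os.Remove(link) // mimic `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}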
	I0812 10:37:04.270902   22139 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 10:37:04.275889   22139 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0812 10:37:04.275949   22139 kubeadm.go:392] StartCluster: {Name:ha-919901 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-919901 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 10:37:04.276053   22139 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0812 10:37:04.276130   22139 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 10:37:04.318376   22139 cri.go:89] found id: ""
	I0812 10:37:04.318457   22139 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0812 10:37:04.329217   22139 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 10:37:04.339184   22139 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 10:37:04.348640   22139 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 10:37:04.348661   22139 kubeadm.go:157] found existing configuration files:
	
	I0812 10:37:04.348703   22139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 10:37:04.357819   22139 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 10:37:04.357887   22139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 10:37:04.368911   22139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 10:37:04.378409   22139 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 10:37:04.378472   22139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 10:37:04.389662   22139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 10:37:04.400599   22139 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 10:37:04.400672   22139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 10:37:04.412426   22139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 10:37:04.423193   22139 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 10:37:04.423254   22139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 10:37:04.434581   22139 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 10:37:04.556756   22139 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0812 10:37:04.556847   22139 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 10:37:04.679286   22139 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 10:37:04.679392   22139 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 10:37:04.679501   22139 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0812 10:37:04.883377   22139 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 10:37:05.013776   22139 out.go:204]   - Generating certificates and keys ...
	I0812 10:37:05.013892   22139 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 10:37:05.013999   22139 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 10:37:05.021650   22139 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0812 10:37:05.105693   22139 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0812 10:37:05.204662   22139 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0812 10:37:05.472479   22139 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0812 10:37:05.625833   22139 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0812 10:37:05.625971   22139 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-919901 localhost] and IPs [192.168.39.5 127.0.0.1 ::1]
	I0812 10:37:05.895297   22139 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0812 10:37:05.895485   22139 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-919901 localhost] and IPs [192.168.39.5 127.0.0.1 ::1]
	I0812 10:37:05.956929   22139 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0812 10:37:06.216059   22139 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0812 10:37:06.259832   22139 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0812 10:37:06.259922   22139 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 10:37:06.373511   22139 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 10:37:06.490156   22139 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0812 10:37:06.604171   22139 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 10:37:06.669583   22139 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 10:37:06.788499   22139 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 10:37:06.789058   22139 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 10:37:06.791857   22139 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 10:37:06.793956   22139 out.go:204]   - Booting up control plane ...
	I0812 10:37:06.794048   22139 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 10:37:06.794129   22139 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 10:37:06.794218   22139 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 10:37:06.812534   22139 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 10:37:06.813476   22139 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 10:37:06.813534   22139 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 10:37:06.943625   22139 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0812 10:37:06.943703   22139 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0812 10:37:07.444184   22139 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.948018ms
	I0812 10:37:07.444267   22139 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0812 10:37:13.481049   22139 kubeadm.go:310] [api-check] The API server is healthy after 6.040247289s
	I0812 10:37:13.499700   22139 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0812 10:37:13.517044   22139 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0812 10:37:14.047469   22139 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0812 10:37:14.047716   22139 kubeadm.go:310] [mark-control-plane] Marking the node ha-919901 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0812 10:37:14.065443   22139 kubeadm.go:310] [bootstrap-token] Using token: ddr49h.zjklblvn621csm71
	I0812 10:37:14.067339   22139 out.go:204]   - Configuring RBAC rules ...
	I0812 10:37:14.067502   22139 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0812 10:37:14.073047   22139 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0812 10:37:14.084914   22139 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0812 10:37:14.088276   22139 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0812 10:37:14.091458   22139 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0812 10:37:14.095141   22139 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0812 10:37:14.114360   22139 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0812 10:37:14.374796   22139 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0812 10:37:14.890413   22139 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0812 10:37:14.891467   22139 kubeadm.go:310] 
	I0812 10:37:14.891542   22139 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0812 10:37:14.891564   22139 kubeadm.go:310] 
	I0812 10:37:14.891700   22139 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0812 10:37:14.891736   22139 kubeadm.go:310] 
	I0812 10:37:14.891797   22139 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0812 10:37:14.891874   22139 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0812 10:37:14.891948   22139 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0812 10:37:14.891957   22139 kubeadm.go:310] 
	I0812 10:37:14.892030   22139 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0812 10:37:14.892040   22139 kubeadm.go:310] 
	I0812 10:37:14.892134   22139 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0812 10:37:14.892152   22139 kubeadm.go:310] 
	I0812 10:37:14.892216   22139 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0812 10:37:14.892329   22139 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0812 10:37:14.892420   22139 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0812 10:37:14.892431   22139 kubeadm.go:310] 
	I0812 10:37:14.892550   22139 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0812 10:37:14.892651   22139 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0812 10:37:14.892665   22139 kubeadm.go:310] 
	I0812 10:37:14.892775   22139 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ddr49h.zjklblvn621csm71 \
	I0812 10:37:14.892950   22139 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc \
	I0812 10:37:14.893004   22139 kubeadm.go:310] 	--control-plane 
	I0812 10:37:14.893015   22139 kubeadm.go:310] 
	I0812 10:37:14.893124   22139 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0812 10:37:14.893138   22139 kubeadm.go:310] 
	I0812 10:37:14.893235   22139 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ddr49h.zjklblvn621csm71 \
	I0812 10:37:14.893394   22139 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc 
	I0812 10:37:14.893546   22139 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
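Editor's note: the `--discovery-token-ca-cert-hash sha256:…` value in the join commands above is the SHA-256 of the cluster CA's DER-encoded Subject Public Key Info, which joining nodes use to pin the CA during token-based discovery. A short sketch of computing it from the CA certificate path used on the node:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm pins the CA by hashing its Subject Public Key Info, not the whole cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}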
	I0812 10:37:14.893559   22139 cni.go:84] Creating CNI manager for ""
	I0812 10:37:14.893565   22139 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0812 10:37:14.895475   22139 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0812 10:37:14.896683   22139 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0812 10:37:14.902178   22139 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0812 10:37:14.902201   22139 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0812 10:37:14.925710   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0812 10:37:15.282032   22139 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 10:37:15.282131   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:15.282152   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-919901 minikube.k8s.io/updated_at=2024_08_12T10_37_15_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7 minikube.k8s.io/name=ha-919901 minikube.k8s.io/primary=true
	I0812 10:37:15.412386   22139 ops.go:34] apiserver oom_adj: -16
	I0812 10:37:15.412591   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:15.913317   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:16.412669   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:16.913172   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:17.413013   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:17.912853   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:18.413497   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:18.913669   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:19.412998   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:19.912734   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:20.413186   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:20.912731   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:21.413508   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:21.912784   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:22.412763   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:22.912882   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:23.413080   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:23.913390   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:24.412716   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:24.912693   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:25.413011   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:25.913281   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:26.413171   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:26.913156   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:27.413463   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:27.536903   22139 kubeadm.go:1113] duration metric: took 12.254848272s to wait for elevateKubeSystemPrivileges
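Editor's note: the burst of `kubectl get sa default` calls between 10:37:15 and 10:37:27 is a poll: the step keeps retrying roughly every 500ms until the "default" service account exists, at which point the RBAC/privilege setup is considered done. A sketch of such a retry loop using the binary and kubeconfig paths from the log (not minikube's exact loop, and the 2-minute deadline is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.30.3/kubectl"
	deadline := time.Now().Add(2 * time.Minute)

	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account exists; cluster is usable")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log shows ~500ms between attempts
	}
	fmt.Println("timed out waiting for the default service account")
}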
	I0812 10:37:27.536936   22139 kubeadm.go:394] duration metric: took 23.260991872s to StartCluster
	I0812 10:37:27.536952   22139 settings.go:142] acquiring lock: {Name:mk4060151bda1f131d6abfbbd19b6a8ab3b5e774 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:37:27.537021   22139 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 10:37:27.537714   22139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/kubeconfig: {Name:mke68f1d372d8e5baa69199b426efaec54860499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:37:27.537921   22139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0812 10:37:27.537956   22139 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0812 10:37:27.538027   22139 addons.go:69] Setting storage-provisioner=true in profile "ha-919901"
	I0812 10:37:27.537919   22139 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 10:37:27.538053   22139 addons.go:234] Setting addon storage-provisioner=true in "ha-919901"
	I0812 10:37:27.538056   22139 addons.go:69] Setting default-storageclass=true in profile "ha-919901"
	I0812 10:37:27.538059   22139 start.go:241] waiting for startup goroutines ...
	I0812 10:37:27.538085   22139 host.go:66] Checking if "ha-919901" exists ...
	I0812 10:37:27.538092   22139 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-919901"
	I0812 10:37:27.538167   22139 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:37:27.538571   22139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:37:27.538620   22139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:37:27.538697   22139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:37:27.538732   22139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:37:27.554125   22139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41601
	I0812 10:37:27.554664   22139 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:37:27.554705   22139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38663
	I0812 10:37:27.555147   22139 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:37:27.555289   22139 main.go:141] libmachine: Using API Version  1
	I0812 10:37:27.555314   22139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:37:27.555721   22139 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:37:27.555852   22139 main.go:141] libmachine: Using API Version  1
	I0812 10:37:27.555882   22139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:37:27.556207   22139 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:37:27.556355   22139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:37:27.556388   22139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:37:27.556395   22139 main.go:141] libmachine: (ha-919901) Calling .GetState
	I0812 10:37:27.558679   22139 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 10:37:27.559028   22139 kapi.go:59] client config for ha-919901: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/client.crt", KeyFile:"/home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/client.key", CAFile:"/home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d03100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0812 10:37:27.559599   22139 cert_rotation.go:137] Starting client certificate rotation controller
	I0812 10:37:27.559825   22139 addons.go:234] Setting addon default-storageclass=true in "ha-919901"
	I0812 10:37:27.559875   22139 host.go:66] Checking if "ha-919901" exists ...
	I0812 10:37:27.560245   22139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:37:27.560292   22139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:37:27.573229   22139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33615
	I0812 10:37:27.573754   22139 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:37:27.574430   22139 main.go:141] libmachine: Using API Version  1
	I0812 10:37:27.574461   22139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:37:27.574872   22139 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:37:27.575124   22139 main.go:141] libmachine: (ha-919901) Calling .GetState
	I0812 10:37:27.577006   22139 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:37:27.577086   22139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46379
	I0812 10:37:27.577571   22139 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:37:27.578060   22139 main.go:141] libmachine: Using API Version  1
	I0812 10:37:27.578076   22139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:37:27.578355   22139 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:37:27.578907   22139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:37:27.578941   22139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:37:27.579888   22139 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 10:37:27.581264   22139 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 10:37:27.581282   22139 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 10:37:27.581303   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:37:27.584447   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:37:27.584821   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:37:27.584854   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:37:27.585047   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:37:27.585276   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:37:27.585492   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:37:27.585657   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:37:27.595948   22139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39825
	I0812 10:37:27.596502   22139 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:37:27.597079   22139 main.go:141] libmachine: Using API Version  1
	I0812 10:37:27.597108   22139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:37:27.597438   22139 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:37:27.597659   22139 main.go:141] libmachine: (ha-919901) Calling .GetState
	I0812 10:37:27.599596   22139 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:37:27.599875   22139 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 10:37:27.599893   22139 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 10:37:27.599911   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:37:27.602660   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:37:27.603052   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:37:27.603086   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:37:27.603293   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:37:27.603524   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:37:27.603704   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:37:27.603863   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:37:27.699239   22139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0812 10:37:27.766958   22139 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 10:37:27.794871   22139 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 10:37:28.120336   22139 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
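Editor's note: the sed pipeline at 10:37:27.699 inserts a `hosts { 192.168.39.1 host.minikube.internal; fallthrough }` block into the CoreDNS Corefile just before its `forward . /etc/resolv.conf` line (and a `log` directive before `errors`), then replaces the ConfigMap. A pure-string sketch of the hosts insertion only; the sample Corefile below is illustrative, and fetching/replacing the ConfigMap is left to kubectl as in the log:

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord adds a hosts{} block in front of the "forward ." plugin so
// host.minikube.internal resolves to the host-only network gateway.
func injectHostRecord(corefile, ip, name string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s %s\n           fallthrough\n        }\n", ip, name)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf {\n       max_concurrent 1000\n    }\n    cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.39.1", "host.minikube.internal"))
}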
	I0812 10:37:28.446833   22139 main.go:141] libmachine: Making call to close driver server
	I0812 10:37:28.446863   22139 main.go:141] libmachine: (ha-919901) Calling .Close
	I0812 10:37:28.446912   22139 main.go:141] libmachine: Making call to close driver server
	I0812 10:37:28.446934   22139 main.go:141] libmachine: (ha-919901) Calling .Close
	I0812 10:37:28.447181   22139 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:37:28.447207   22139 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:37:28.447250   22139 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:37:28.447269   22139 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:37:28.447282   22139 main.go:141] libmachine: Making call to close driver server
	I0812 10:37:28.447281   22139 main.go:141] libmachine: (ha-919901) DBG | Closing plugin on server side
	I0812 10:37:28.447290   22139 main.go:141] libmachine: (ha-919901) Calling .Close
	I0812 10:37:28.447255   22139 main.go:141] libmachine: Making call to close driver server
	I0812 10:37:28.447336   22139 main.go:141] libmachine: (ha-919901) Calling .Close
	I0812 10:37:28.447217   22139 main.go:141] libmachine: (ha-919901) DBG | Closing plugin on server side
	I0812 10:37:28.447497   22139 main.go:141] libmachine: (ha-919901) DBG | Closing plugin on server side
	I0812 10:37:28.447522   22139 main.go:141] libmachine: (ha-919901) DBG | Closing plugin on server side
	I0812 10:37:28.447589   22139 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:37:28.447602   22139 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:37:28.447608   22139 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:37:28.447617   22139 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:37:28.447745   22139 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0812 10:37:28.447755   22139 round_trippers.go:469] Request Headers:
	I0812 10:37:28.447768   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:37:28.447775   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:37:28.464847   22139 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0812 10:37:28.465671   22139 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0812 10:37:28.465690   22139 round_trippers.go:469] Request Headers:
	I0812 10:37:28.465701   22139 round_trippers.go:473]     Content-Type: application/json
	I0812 10:37:28.465706   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:37:28.465710   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:37:28.470773   22139 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0812 10:37:28.470973   22139 main.go:141] libmachine: Making call to close driver server
	I0812 10:37:28.470990   22139 main.go:141] libmachine: (ha-919901) Calling .Close
	I0812 10:37:28.471298   22139 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:37:28.471318   22139 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:37:28.474411   22139 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0812 10:37:28.476114   22139 addons.go:510] duration metric: took 938.14967ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0812 10:37:28.476160   22139 start.go:246] waiting for cluster config update ...
	I0812 10:37:28.476175   22139 start.go:255] writing updated cluster config ...
	I0812 10:37:28.478101   22139 out.go:177] 
	I0812 10:37:28.480226   22139 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:37:28.480324   22139 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/config.json ...
	I0812 10:37:28.482014   22139 out.go:177] * Starting "ha-919901-m02" control-plane node in "ha-919901" cluster
	I0812 10:37:28.483796   22139 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 10:37:28.483826   22139 cache.go:56] Caching tarball of preloaded images
	I0812 10:37:28.483927   22139 preload.go:172] Found /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 10:37:28.483941   22139 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 10:37:28.484038   22139 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/config.json ...
	I0812 10:37:28.484245   22139 start.go:360] acquireMachinesLock for ha-919901-m02: {Name:mkf1f3ff7da562a087a1e344d13e67c3c8140973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 10:37:28.484302   22139 start.go:364] duration metric: took 34.303µs to acquireMachinesLock for "ha-919901-m02"
	I0812 10:37:28.484323   22139 start.go:93] Provisioning new machine with config: &{Name:ha-919901 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-919901 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
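
The provisioning call above carries the whole cluster config plus the per-node spec for "m02". As a reading aid, a minimal sketch of the per-node fields visible in that line; the field names are copied from the log, but the Go type itself is an illustration, not necessarily minikube's actual definition.

// Illustrative shape of the node spec seen above
// ({Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}).
package config

type Node struct {
	Name              string // "m02"; empty for the primary node
	IP                string // filled in once DHCP assigns an address
	Port              int    // 8443, the API server port
	KubernetesVersion string // "v1.30.3"
	ContainerRuntime  string // "crio"
	ControlPlane      bool   // true: runs control-plane components
	Worker            bool   // true: also schedules workloads
}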
	I0812 10:37:28.484418   22139 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0812 10:37:28.486110   22139 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 10:37:28.486219   22139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:37:28.486252   22139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:37:28.502135   22139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38017
	I0812 10:37:28.502628   22139 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:37:28.503153   22139 main.go:141] libmachine: Using API Version  1
	I0812 10:37:28.503182   22139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:37:28.503527   22139 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:37:28.503746   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetMachineName
	I0812 10:37:28.503940   22139 main.go:141] libmachine: (ha-919901-m02) Calling .DriverName
	I0812 10:37:28.504112   22139 start.go:159] libmachine.API.Create for "ha-919901" (driver="kvm2")
	I0812 10:37:28.504140   22139 client.go:168] LocalClient.Create starting
	I0812 10:37:28.504181   22139 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem
	I0812 10:37:28.504231   22139 main.go:141] libmachine: Decoding PEM data...
	I0812 10:37:28.504247   22139 main.go:141] libmachine: Parsing certificate...
	I0812 10:37:28.504322   22139 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem
	I0812 10:37:28.504346   22139 main.go:141] libmachine: Decoding PEM data...
	I0812 10:37:28.504358   22139 main.go:141] libmachine: Parsing certificate...
	I0812 10:37:28.504378   22139 main.go:141] libmachine: Running pre-create checks...
	I0812 10:37:28.504389   22139 main.go:141] libmachine: (ha-919901-m02) Calling .PreCreateCheck
	I0812 10:37:28.504581   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetConfigRaw
	I0812 10:37:28.505092   22139 main.go:141] libmachine: Creating machine...
	I0812 10:37:28.505108   22139 main.go:141] libmachine: (ha-919901-m02) Calling .Create
	I0812 10:37:28.505273   22139 main.go:141] libmachine: (ha-919901-m02) Creating KVM machine...
	I0812 10:37:28.506878   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found existing default KVM network
	I0812 10:37:28.507019   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found existing private KVM network mk-ha-919901
	I0812 10:37:28.507170   22139 main.go:141] libmachine: (ha-919901-m02) Setting up store path in /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02 ...
	I0812 10:37:28.507196   22139 main.go:141] libmachine: (ha-919901-m02) Building disk image from file:///home/jenkins/minikube-integration/19409-3774/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0812 10:37:28.507246   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:28.507159   22539 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 10:37:28.507385   22139 main.go:141] libmachine: (ha-919901-m02) Downloading /home/jenkins/minikube-integration/19409-3774/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19409-3774/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0812 10:37:28.781097   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:28.780972   22539 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02/id_rsa...
	I0812 10:37:28.910232   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:28.910067   22539 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02/ha-919901-m02.rawdisk...
	I0812 10:37:28.910270   22139 main.go:141] libmachine: (ha-919901-m02) DBG | Writing magic tar header
	I0812 10:37:28.910285   22139 main.go:141] libmachine: (ha-919901-m02) DBG | Writing SSH key tar header
	I0812 10:37:28.910296   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:28.910186   22539 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02 ...
	I0812 10:37:28.910312   22139 main.go:141] libmachine: (ha-919901-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02
	I0812 10:37:28.910331   22139 main.go:141] libmachine: (ha-919901-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube/machines
	I0812 10:37:28.910351   22139 main.go:141] libmachine: (ha-919901-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 10:37:28.910368   22139 main.go:141] libmachine: (ha-919901-m02) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02 (perms=drwx------)
	I0812 10:37:28.910381   22139 main.go:141] libmachine: (ha-919901-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774
	I0812 10:37:28.910398   22139 main.go:141] libmachine: (ha-919901-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0812 10:37:28.910410   22139 main.go:141] libmachine: (ha-919901-m02) DBG | Checking permissions on dir: /home/jenkins
	I0812 10:37:28.910425   22139 main.go:141] libmachine: (ha-919901-m02) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube/machines (perms=drwxr-xr-x)
	I0812 10:37:28.910439   22139 main.go:141] libmachine: (ha-919901-m02) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube (perms=drwxr-xr-x)
	I0812 10:37:28.910462   22139 main.go:141] libmachine: (ha-919901-m02) DBG | Checking permissions on dir: /home
	I0812 10:37:28.910478   22139 main.go:141] libmachine: (ha-919901-m02) DBG | Skipping /home - not owner
	I0812 10:37:28.910490   22139 main.go:141] libmachine: (ha-919901-m02) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774 (perms=drwxrwxr-x)
	I0812 10:37:28.910508   22139 main.go:141] libmachine: (ha-919901-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0812 10:37:28.910523   22139 main.go:141] libmachine: (ha-919901-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0812 10:37:28.910536   22139 main.go:141] libmachine: (ha-919901-m02) Creating domain...
	I0812 10:37:28.911452   22139 main.go:141] libmachine: (ha-919901-m02) define libvirt domain using xml: 
	I0812 10:37:28.911483   22139 main.go:141] libmachine: (ha-919901-m02) <domain type='kvm'>
	I0812 10:37:28.911495   22139 main.go:141] libmachine: (ha-919901-m02)   <name>ha-919901-m02</name>
	I0812 10:37:28.911506   22139 main.go:141] libmachine: (ha-919901-m02)   <memory unit='MiB'>2200</memory>
	I0812 10:37:28.911543   22139 main.go:141] libmachine: (ha-919901-m02)   <vcpu>2</vcpu>
	I0812 10:37:28.911565   22139 main.go:141] libmachine: (ha-919901-m02)   <features>
	I0812 10:37:28.911576   22139 main.go:141] libmachine: (ha-919901-m02)     <acpi/>
	I0812 10:37:28.911587   22139 main.go:141] libmachine: (ha-919901-m02)     <apic/>
	I0812 10:37:28.911600   22139 main.go:141] libmachine: (ha-919901-m02)     <pae/>
	I0812 10:37:28.911607   22139 main.go:141] libmachine: (ha-919901-m02)     
	I0812 10:37:28.911617   22139 main.go:141] libmachine: (ha-919901-m02)   </features>
	I0812 10:37:28.911629   22139 main.go:141] libmachine: (ha-919901-m02)   <cpu mode='host-passthrough'>
	I0812 10:37:28.911640   22139 main.go:141] libmachine: (ha-919901-m02)   
	I0812 10:37:28.911648   22139 main.go:141] libmachine: (ha-919901-m02)   </cpu>
	I0812 10:37:28.911660   22139 main.go:141] libmachine: (ha-919901-m02)   <os>
	I0812 10:37:28.911671   22139 main.go:141] libmachine: (ha-919901-m02)     <type>hvm</type>
	I0812 10:37:28.911686   22139 main.go:141] libmachine: (ha-919901-m02)     <boot dev='cdrom'/>
	I0812 10:37:28.911697   22139 main.go:141] libmachine: (ha-919901-m02)     <boot dev='hd'/>
	I0812 10:37:28.911707   22139 main.go:141] libmachine: (ha-919901-m02)     <bootmenu enable='no'/>
	I0812 10:37:28.911718   22139 main.go:141] libmachine: (ha-919901-m02)   </os>
	I0812 10:37:28.911728   22139 main.go:141] libmachine: (ha-919901-m02)   <devices>
	I0812 10:37:28.911739   22139 main.go:141] libmachine: (ha-919901-m02)     <disk type='file' device='cdrom'>
	I0812 10:37:28.911760   22139 main.go:141] libmachine: (ha-919901-m02)       <source file='/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02/boot2docker.iso'/>
	I0812 10:37:28.911777   22139 main.go:141] libmachine: (ha-919901-m02)       <target dev='hdc' bus='scsi'/>
	I0812 10:37:28.911787   22139 main.go:141] libmachine: (ha-919901-m02)       <readonly/>
	I0812 10:37:28.911798   22139 main.go:141] libmachine: (ha-919901-m02)     </disk>
	I0812 10:37:28.911811   22139 main.go:141] libmachine: (ha-919901-m02)     <disk type='file' device='disk'>
	I0812 10:37:28.911824   22139 main.go:141] libmachine: (ha-919901-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0812 10:37:28.911840   22139 main.go:141] libmachine: (ha-919901-m02)       <source file='/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02/ha-919901-m02.rawdisk'/>
	I0812 10:37:28.911856   22139 main.go:141] libmachine: (ha-919901-m02)       <target dev='hda' bus='virtio'/>
	I0812 10:37:28.911868   22139 main.go:141] libmachine: (ha-919901-m02)     </disk>
	I0812 10:37:28.911877   22139 main.go:141] libmachine: (ha-919901-m02)     <interface type='network'>
	I0812 10:37:28.911894   22139 main.go:141] libmachine: (ha-919901-m02)       <source network='mk-ha-919901'/>
	I0812 10:37:28.911905   22139 main.go:141] libmachine: (ha-919901-m02)       <model type='virtio'/>
	I0812 10:37:28.911925   22139 main.go:141] libmachine: (ha-919901-m02)     </interface>
	I0812 10:37:28.911940   22139 main.go:141] libmachine: (ha-919901-m02)     <interface type='network'>
	I0812 10:37:28.911951   22139 main.go:141] libmachine: (ha-919901-m02)       <source network='default'/>
	I0812 10:37:28.911962   22139 main.go:141] libmachine: (ha-919901-m02)       <model type='virtio'/>
	I0812 10:37:28.911974   22139 main.go:141] libmachine: (ha-919901-m02)     </interface>
	I0812 10:37:28.911985   22139 main.go:141] libmachine: (ha-919901-m02)     <serial type='pty'>
	I0812 10:37:28.911997   22139 main.go:141] libmachine: (ha-919901-m02)       <target port='0'/>
	I0812 10:37:28.912011   22139 main.go:141] libmachine: (ha-919901-m02)     </serial>
	I0812 10:37:28.912024   22139 main.go:141] libmachine: (ha-919901-m02)     <console type='pty'>
	I0812 10:37:28.912036   22139 main.go:141] libmachine: (ha-919901-m02)       <target type='serial' port='0'/>
	I0812 10:37:28.912048   22139 main.go:141] libmachine: (ha-919901-m02)     </console>
	I0812 10:37:28.912059   22139 main.go:141] libmachine: (ha-919901-m02)     <rng model='virtio'>
	I0812 10:37:28.912071   22139 main.go:141] libmachine: (ha-919901-m02)       <backend model='random'>/dev/random</backend>
	I0812 10:37:28.912085   22139 main.go:141] libmachine: (ha-919901-m02)     </rng>
	I0812 10:37:28.912097   22139 main.go:141] libmachine: (ha-919901-m02)     
	I0812 10:37:28.912103   22139 main.go:141] libmachine: (ha-919901-m02)     
	I0812 10:37:28.912113   22139 main.go:141] libmachine: (ha-919901-m02)   </devices>
	I0812 10:37:28.912122   22139 main.go:141] libmachine: (ha-919901-m02) </domain>
	I0812 10:37:28.912134   22139 main.go:141] libmachine: (ha-919901-m02) 
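
The driver assembles the domain XML printed above and asks libvirt to define and boot it ("Creating domain..."). A minimal sketch of that step using the libvirt Go bindings, assuming the XML document has been saved to domain.xml; this is an illustration, not the kvm2 driver's own code.

// Define and start a KVM domain from an XML document like the one logged above.
package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Assumption: domain.xml holds the <domain type='kvm'> document from the log.
	xml, err := os.ReadFile("domain.xml")
	if err != nil {
		panic(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system") // KVMQemuURI from the config above
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // persist the definition
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boot it ("Creating domain...")
		panic(err)
	}
	name, _ := dom.GetName()
	fmt.Println("defined and started domain", name)
}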
	I0812 10:37:28.919566   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:8c:1d:03 in network default
	I0812 10:37:28.920179   22139 main.go:141] libmachine: (ha-919901-m02) Ensuring networks are active...
	I0812 10:37:28.920198   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:28.920934   22139 main.go:141] libmachine: (ha-919901-m02) Ensuring network default is active
	I0812 10:37:28.921183   22139 main.go:141] libmachine: (ha-919901-m02) Ensuring network mk-ha-919901 is active
	I0812 10:37:28.921528   22139 main.go:141] libmachine: (ha-919901-m02) Getting domain xml...
	I0812 10:37:28.922191   22139 main.go:141] libmachine: (ha-919901-m02) Creating domain...
	I0812 10:37:30.153706   22139 main.go:141] libmachine: (ha-919901-m02) Waiting to get IP...
	I0812 10:37:30.154606   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:30.154983   22139 main.go:141] libmachine: (ha-919901-m02) DBG | unable to find current IP address of domain ha-919901-m02 in network mk-ha-919901
	I0812 10:37:30.155023   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:30.154974   22539 retry.go:31] will retry after 288.98178ms: waiting for machine to come up
	I0812 10:37:30.445696   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:30.446231   22139 main.go:141] libmachine: (ha-919901-m02) DBG | unable to find current IP address of domain ha-919901-m02 in network mk-ha-919901
	I0812 10:37:30.446256   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:30.446189   22539 retry.go:31] will retry after 236.090765ms: waiting for machine to come up
	I0812 10:37:30.683850   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:30.684299   22139 main.go:141] libmachine: (ha-919901-m02) DBG | unable to find current IP address of domain ha-919901-m02 in network mk-ha-919901
	I0812 10:37:30.684325   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:30.684259   22539 retry.go:31] will retry after 430.221058ms: waiting for machine to come up
	I0812 10:37:31.115951   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:31.116471   22139 main.go:141] libmachine: (ha-919901-m02) DBG | unable to find current IP address of domain ha-919901-m02 in network mk-ha-919901
	I0812 10:37:31.116494   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:31.116403   22539 retry.go:31] will retry after 416.1691ms: waiting for machine to come up
	I0812 10:37:31.533738   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:31.534279   22139 main.go:141] libmachine: (ha-919901-m02) DBG | unable to find current IP address of domain ha-919901-m02 in network mk-ha-919901
	I0812 10:37:31.534308   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:31.534240   22539 retry.go:31] will retry after 697.888434ms: waiting for machine to come up
	I0812 10:37:32.235212   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:32.236071   22139 main.go:141] libmachine: (ha-919901-m02) DBG | unable to find current IP address of domain ha-919901-m02 in network mk-ha-919901
	I0812 10:37:32.236102   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:32.236024   22539 retry.go:31] will retry after 840.769999ms: waiting for machine to come up
	I0812 10:37:33.078146   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:33.078614   22139 main.go:141] libmachine: (ha-919901-m02) DBG | unable to find current IP address of domain ha-919901-m02 in network mk-ha-919901
	I0812 10:37:33.078637   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:33.078574   22539 retry.go:31] will retry after 933.572158ms: waiting for machine to come up
	I0812 10:37:34.014056   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:34.014359   22139 main.go:141] libmachine: (ha-919901-m02) DBG | unable to find current IP address of domain ha-919901-m02 in network mk-ha-919901
	I0812 10:37:34.014381   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:34.014321   22539 retry.go:31] will retry after 1.271180368s: waiting for machine to come up
	I0812 10:37:35.287618   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:35.288006   22139 main.go:141] libmachine: (ha-919901-m02) DBG | unable to find current IP address of domain ha-919901-m02 in network mk-ha-919901
	I0812 10:37:35.288028   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:35.287966   22539 retry.go:31] will retry after 1.697317183s: waiting for machine to come up
	I0812 10:37:36.986948   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:36.987355   22139 main.go:141] libmachine: (ha-919901-m02) DBG | unable to find current IP address of domain ha-919901-m02 in network mk-ha-919901
	I0812 10:37:36.987427   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:36.987314   22539 retry.go:31] will retry after 2.104575739s: waiting for machine to come up
	I0812 10:37:39.093432   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:39.093883   22139 main.go:141] libmachine: (ha-919901-m02) DBG | unable to find current IP address of domain ha-919901-m02 in network mk-ha-919901
	I0812 10:37:39.093911   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:39.093839   22539 retry.go:31] will retry after 2.180330285s: waiting for machine to come up
	I0812 10:37:41.277251   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:41.277754   22139 main.go:141] libmachine: (ha-919901-m02) DBG | unable to find current IP address of domain ha-919901-m02 in network mk-ha-919901
	I0812 10:37:41.277782   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:41.277682   22539 retry.go:31] will retry after 3.39047776s: waiting for machine to come up
	I0812 10:37:44.670256   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:44.670796   22139 main.go:141] libmachine: (ha-919901-m02) DBG | unable to find current IP address of domain ha-919901-m02 in network mk-ha-919901
	I0812 10:37:44.670824   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:44.670757   22539 retry.go:31] will retry after 4.366154175s: waiting for machine to come up
	I0812 10:37:49.038704   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.039253   22139 main.go:141] libmachine: (ha-919901-m02) Found IP for machine: 192.168.39.139
	I0812 10:37:49.039288   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has current primary IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.039298   22139 main.go:141] libmachine: (ha-919901-m02) Reserving static IP address...
	I0812 10:37:49.039779   22139 main.go:141] libmachine: (ha-919901-m02) DBG | unable to find host DHCP lease matching {name: "ha-919901-m02", mac: "52:54:00:aa:34:35", ip: "192.168.39.139"} in network mk-ha-919901
	I0812 10:37:49.117017   22139 main.go:141] libmachine: (ha-919901-m02) DBG | Getting to WaitForSSH function...
	I0812 10:37:49.117049   22139 main.go:141] libmachine: (ha-919901-m02) Reserved static IP address: 192.168.39.139
	I0812 10:37:49.117063   22139 main.go:141] libmachine: (ha-919901-m02) Waiting for SSH to be available...
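
The retry lines above poll the DHCP leases of mk-ha-919901 for the new MAC and wait with growing, jittered delays (288ms, 430ms, ... 4.3s) until an address appears. A minimal sketch of that pattern; lookupIP is a placeholder for the lease query, not a libmachine function.

// Sketch of the IP-wait loop: poll with a jittered, roughly geometric backoff
// until the address shows up or the deadline passes.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitForIP(lookupIP func() (string, bool), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := lookupIP(); ok {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	start := time.Now()
	ip, err := waitForIP(func() (string, bool) {
		// Placeholder: pretend the DHCP lease appears after ~3s.
		return "192.168.39.139", time.Since(start) > 3*time.Second
	}, 2*time.Minute)
	fmt.Println(ip, err)
}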
	I0812 10:37:49.119789   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.120270   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:minikube Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:49.120297   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.120506   22139 main.go:141] libmachine: (ha-919901-m02) DBG | Using SSH client type: external
	I0812 10:37:49.120535   22139 main.go:141] libmachine: (ha-919901-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02/id_rsa (-rw-------)
	I0812 10:37:49.120567   22139 main.go:141] libmachine: (ha-919901-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.139 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0812 10:37:49.120604   22139 main.go:141] libmachine: (ha-919901-m02) DBG | About to run SSH command:
	I0812 10:37:49.120621   22139 main.go:141] libmachine: (ha-919901-m02) DBG | exit 0
	I0812 10:37:49.240732   22139 main.go:141] libmachine: (ha-919901-m02) DBG | SSH cmd err, output: <nil>: 
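
WaitForSSH above shells out to the system ssh binary with the logged options and runs "exit 0" until the command succeeds. A small sketch of that probe with os/exec; the flags, key path and address are the ones from the log, so adjust them for any other environment.

// External-SSH reachability probe: keep running `ssh ... exit 0` until it exits zero.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshReady(keyPath, addr string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		addr, "exit 0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	key := "/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02/id_rsa"
	for !sshReady(key, "docker@192.168.39.139") {
		time.Sleep(2 * time.Second)
	}
	fmt.Println("SSH is available")
}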
	I0812 10:37:49.241012   22139 main.go:141] libmachine: (ha-919901-m02) KVM machine creation complete!
	I0812 10:37:49.241324   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetConfigRaw
	I0812 10:37:49.241891   22139 main.go:141] libmachine: (ha-919901-m02) Calling .DriverName
	I0812 10:37:49.242080   22139 main.go:141] libmachine: (ha-919901-m02) Calling .DriverName
	I0812 10:37:49.242197   22139 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0812 10:37:49.242214   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetState
	I0812 10:37:49.243430   22139 main.go:141] libmachine: Detecting operating system of created instance...
	I0812 10:37:49.243449   22139 main.go:141] libmachine: Waiting for SSH to be available...
	I0812 10:37:49.243454   22139 main.go:141] libmachine: Getting to WaitForSSH function...
	I0812 10:37:49.243460   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHHostname
	I0812 10:37:49.245554   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.245945   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:49.245989   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.245995   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHPort
	I0812 10:37:49.246157   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:37:49.246323   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:37:49.246463   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHUsername
	I0812 10:37:49.246611   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:37:49.246800   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0812 10:37:49.246817   22139 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0812 10:37:49.340024   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 10:37:49.340067   22139 main.go:141] libmachine: Detecting the provisioner...
	I0812 10:37:49.340078   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHHostname
	I0812 10:37:49.342907   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.343316   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:49.343340   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.343612   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHPort
	I0812 10:37:49.343843   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:37:49.344017   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:37:49.344151   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHUsername
	I0812 10:37:49.344282   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:37:49.344438   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0812 10:37:49.344450   22139 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0812 10:37:49.445619   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0812 10:37:49.445723   22139 main.go:141] libmachine: found compatible host: buildroot
	I0812 10:37:49.445741   22139 main.go:141] libmachine: Provisioning with buildroot...
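
"Detecting the provisioner" keys off the key=value output of cat /etc/os-release shown above (NAME=Buildroot, ID=buildroot, VERSION_ID=2023.02.9). A minimal sketch of parsing that format; the mapping to "buildroot" here is a simplified illustration of the decision, not minikube's exact logic.

// Parse /etc/os-release style output and detect a Buildroot guest.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func parseOSRelease(out string) map[string]string {
	kv := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		parts := strings.SplitN(line, "=", 2)
		kv[parts[0]] = strings.Trim(parts[1], `"`)
	}
	return kv
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(out)
	if info["ID"] == "buildroot" {
		fmt.Println("found compatible host: buildroot", info["VERSION_ID"])
	}
}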
	I0812 10:37:49.445751   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetMachineName
	I0812 10:37:49.445990   22139 buildroot.go:166] provisioning hostname "ha-919901-m02"
	I0812 10:37:49.446016   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetMachineName
	I0812 10:37:49.446197   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHHostname
	I0812 10:37:49.449003   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.449464   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:49.449486   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.449707   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHPort
	I0812 10:37:49.449925   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:37:49.450085   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:37:49.450223   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHUsername
	I0812 10:37:49.450395   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:37:49.450550   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0812 10:37:49.450563   22139 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-919901-m02 && echo "ha-919901-m02" | sudo tee /etc/hostname
	I0812 10:37:49.568615   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-919901-m02
	
	I0812 10:37:49.568637   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHHostname
	I0812 10:37:49.571358   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.571725   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:49.571756   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.571931   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHPort
	I0812 10:37:49.572123   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:37:49.572308   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:37:49.572450   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHUsername
	I0812 10:37:49.572601   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:37:49.572771   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0812 10:37:49.572792   22139 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-919901-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-919901-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-919901-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 10:37:49.678025   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
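
Hostname provisioning above runs two SSH commands: set the hostname and write /etc/hostname, then patch /etc/hosts so 127.0.1.1 maps to the new name. A tiny sketch of rendering that script for a given hostname; the script body is the one logged, collapsed onto one line for brevity.

// Render the hostname-provisioning shell command for a node name.
package main

import (
	"fmt"
	"strings"
)

func setHostnameCmd(name string) string {
	script := `sudo hostname NAME && echo "NAME" | sudo tee /etc/hostname && if ! grep -xq '.*\sNAME' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NAME/g' /etc/hosts; else echo '127.0.1.1 NAME' | sudo tee -a /etc/hosts; fi; fi`
	return strings.ReplaceAll(script, "NAME", name)
}

func main() {
	fmt.Println(setHostnameCmd("ha-919901-m02"))
}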
	I0812 10:37:49.678058   22139 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19409-3774/.minikube CaCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19409-3774/.minikube}
	I0812 10:37:49.678077   22139 buildroot.go:174] setting up certificates
	I0812 10:37:49.678086   22139 provision.go:84] configureAuth start
	I0812 10:37:49.678097   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetMachineName
	I0812 10:37:49.678391   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetIP
	I0812 10:37:49.681793   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.682166   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:49.682197   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.682378   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHHostname
	I0812 10:37:49.684949   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.685438   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:49.685462   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.685710   22139 provision.go:143] copyHostCerts
	I0812 10:37:49.685747   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem
	I0812 10:37:49.685779   22139 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem, removing ...
	I0812 10:37:49.685788   22139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem
	I0812 10:37:49.685851   22139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem (1123 bytes)
	I0812 10:37:49.685958   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem
	I0812 10:37:49.685987   22139 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem, removing ...
	I0812 10:37:49.685993   22139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem
	I0812 10:37:49.686033   22139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem (1679 bytes)
	I0812 10:37:49.686112   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem
	I0812 10:37:49.686150   22139 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem, removing ...
	I0812 10:37:49.686158   22139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem
	I0812 10:37:49.686194   22139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem (1082 bytes)
	I0812 10:37:49.686333   22139 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem org=jenkins.ha-919901-m02 san=[127.0.0.1 192.168.39.139 ha-919901-m02 localhost minikube]
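
The provisioner above issues a server certificate signed by the local CA with the SANs listed in the log (127.0.0.1, 192.168.39.139, ha-919901-m02, localhost, minikube). A minimal crypto/x509 sketch of that kind of issuance; the throwaway self-signed CA in main exists only so the example runs end to end, and none of this is minikube's actual implementation.

// Issue a server cert with the logged SANs, signed by a CA key pair.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-919901-m02"}},
		DNSNames:     []string{"ha-919901-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.139")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return err
	}
	if err := os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644); err != nil {
		return err
	}
	return os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600)
}

func main() {
	// Throwaway self-signed CA so the sketch is runnable; errors elided for brevity.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)
	if err := issueServerCert(caCert, caKey); err != nil {
		panic(err)
	}
}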
	I0812 10:37:49.869783   22139 provision.go:177] copyRemoteCerts
	I0812 10:37:49.869853   22139 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 10:37:49.869882   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHHostname
	I0812 10:37:49.872784   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.873171   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:49.873206   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.873428   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHPort
	I0812 10:37:49.873641   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:37:49.873842   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHUsername
	I0812 10:37:49.873998   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02/id_rsa Username:docker}
	I0812 10:37:49.951239   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0812 10:37:49.951308   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0812 10:37:49.974833   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0812 10:37:49.974900   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0812 10:37:49.999209   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0812 10:37:49.999298   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0812 10:37:50.023782   22139 provision.go:87] duration metric: took 345.685308ms to configureAuth
	I0812 10:37:50.023811   22139 buildroot.go:189] setting minikube options for container-runtime
	I0812 10:37:50.024049   22139 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:37:50.024145   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHHostname
	I0812 10:37:50.026812   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:50.027203   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:50.027236   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:50.027385   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHPort
	I0812 10:37:50.027601   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:37:50.027802   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:37:50.027923   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHUsername
	I0812 10:37:50.028141   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:37:50.028385   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0812 10:37:50.028411   22139 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 10:37:50.281325   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 10:37:50.281357   22139 main.go:141] libmachine: Checking connection to Docker...
	I0812 10:37:50.281368   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetURL
	I0812 10:37:50.282640   22139 main.go:141] libmachine: (ha-919901-m02) DBG | Using libvirt version 6000000
	I0812 10:37:50.285281   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:50.285705   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:50.285735   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:50.285873   22139 main.go:141] libmachine: Docker is up and running!
	I0812 10:37:50.285889   22139 main.go:141] libmachine: Reticulating splines...
	I0812 10:37:50.285895   22139 client.go:171] duration metric: took 21.781744157s to LocalClient.Create
	I0812 10:37:50.285917   22139 start.go:167] duration metric: took 21.781823399s to libmachine.API.Create "ha-919901"
	I0812 10:37:50.285925   22139 start.go:293] postStartSetup for "ha-919901-m02" (driver="kvm2")
	I0812 10:37:50.285935   22139 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 10:37:50.285962   22139 main.go:141] libmachine: (ha-919901-m02) Calling .DriverName
	I0812 10:37:50.286214   22139 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 10:37:50.286236   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHHostname
	I0812 10:37:50.288506   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:50.288886   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:50.288914   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:50.289069   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHPort
	I0812 10:37:50.289245   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:37:50.289441   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHUsername
	I0812 10:37:50.289580   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02/id_rsa Username:docker}
	I0812 10:37:50.366975   22139 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 10:37:50.370963   22139 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 10:37:50.370989   22139 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/addons for local assets ...
	I0812 10:37:50.371057   22139 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/files for local assets ...
	I0812 10:37:50.371159   22139 filesync.go:149] local asset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> 109272.pem in /etc/ssl/certs
	I0812 10:37:50.371173   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> /etc/ssl/certs/109272.pem
	I0812 10:37:50.371282   22139 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 10:37:50.381168   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /etc/ssl/certs/109272.pem (1708 bytes)
	I0812 10:37:50.405187   22139 start.go:296] duration metric: took 119.249935ms for postStartSetup
	I0812 10:37:50.405244   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetConfigRaw
	I0812 10:37:50.405847   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetIP
	I0812 10:37:50.408849   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:50.409229   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:50.409251   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:50.409509   22139 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/config.json ...
	I0812 10:37:50.409710   22139 start.go:128] duration metric: took 21.925281715s to createHost
	I0812 10:37:50.409733   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHHostname
	I0812 10:37:50.411955   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:50.412255   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:50.412285   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:50.412412   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHPort
	I0812 10:37:50.412629   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:37:50.412777   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:37:50.412922   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHUsername
	I0812 10:37:50.413104   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:37:50.413271   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0812 10:37:50.413282   22139 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0812 10:37:50.509662   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723459070.484863706
	
	I0812 10:37:50.509685   22139 fix.go:216] guest clock: 1723459070.484863706
	I0812 10:37:50.509693   22139 fix.go:229] Guest: 2024-08-12 10:37:50.484863706 +0000 UTC Remote: 2024-08-12 10:37:50.409722022 +0000 UTC m=+74.193899662 (delta=75.141684ms)
	I0812 10:37:50.509708   22139 fix.go:200] guest clock delta is within tolerance: 75.141684ms
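
The fix step above compares the guest clock against the host clock and only resyncs when the delta (here 75.141684ms) exceeds a tolerance. A tiny sketch of that comparison; the one-second threshold is an assumption for illustration.

// Compare guest and host clocks and decide whether a resync would be needed.
package main

import (
	"fmt"
	"time"
)

func clockDelta(guest, host time.Time) time.Duration {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	host := time.Now()
	guest := host.Add(75141684 * time.Nanosecond) // the 75.141684ms delta from the log
	const tolerance = 1 * time.Second             // assumed threshold for illustration
	if d := clockDelta(guest, host); d <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", d)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", d)
	}
}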
	I0812 10:37:50.509713   22139 start.go:83] releasing machines lock for "ha-919901-m02", held for 22.02540096s
	I0812 10:37:50.509731   22139 main.go:141] libmachine: (ha-919901-m02) Calling .DriverName
	I0812 10:37:50.510014   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetIP
	I0812 10:37:50.512753   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:50.513153   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:50.513179   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:50.515749   22139 out.go:177] * Found network options:
	I0812 10:37:50.517211   22139 out.go:177]   - NO_PROXY=192.168.39.5
	W0812 10:37:50.518655   22139 proxy.go:119] fail to check proxy env: Error ip not in block
	I0812 10:37:50.518689   22139 main.go:141] libmachine: (ha-919901-m02) Calling .DriverName
	I0812 10:37:50.519289   22139 main.go:141] libmachine: (ha-919901-m02) Calling .DriverName
	I0812 10:37:50.519560   22139 main.go:141] libmachine: (ha-919901-m02) Calling .DriverName
	I0812 10:37:50.519586   22139 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 10:37:50.519625   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHHostname
	W0812 10:37:50.519837   22139 proxy.go:119] fail to check proxy env: Error ip not in block
	I0812 10:37:50.519910   22139 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 10:37:50.519928   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHHostname
	I0812 10:37:50.522516   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:50.522799   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:50.522936   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:50.522961   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:50.523088   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHPort
	I0812 10:37:50.523175   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:50.523199   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:50.523239   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:37:50.523418   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHUsername
	I0812 10:37:50.523420   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHPort
	I0812 10:37:50.523595   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:37:50.523607   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02/id_rsa Username:docker}
	I0812 10:37:50.523723   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHUsername
	I0812 10:37:50.523872   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02/id_rsa Username:docker}
	I0812 10:37:50.755016   22139 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 10:37:50.760527   22139 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 10:37:50.760593   22139 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 10:37:50.776992   22139 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0812 10:37:50.777014   22139 start.go:495] detecting cgroup driver to use...
	I0812 10:37:50.777083   22139 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 10:37:50.795454   22139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 10:37:50.809504   22139 docker.go:217] disabling cri-docker service (if available) ...
	I0812 10:37:50.809570   22139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 10:37:50.823556   22139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 10:37:50.837623   22139 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 10:37:50.959183   22139 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 10:37:51.110686   22139 docker.go:233] disabling docker service ...
	I0812 10:37:51.110759   22139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 10:37:51.124966   22139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 10:37:51.137913   22139 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 10:37:51.279757   22139 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 10:37:51.412131   22139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 10:37:51.427898   22139 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 10:37:51.447921   22139 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0812 10:37:51.447980   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:37:51.459496   22139 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 10:37:51.459550   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:37:51.471100   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:37:51.482858   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:37:51.494998   22139 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 10:37:51.506745   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:37:51.518790   22139 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:37:51.535691   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:37:51.546757   22139 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 10:37:51.556586   22139 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0812 10:37:51.556654   22139 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0812 10:37:51.569752   22139 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 10:37:51.580547   22139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 10:37:51.693279   22139 ssh_runner.go:195] Run: sudo systemctl restart crio
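Collected in one place, the runtime preparation above is a short shell sequence: point crictl at the CRI-O socket, pin the pause image, switch CRI-O to the cgroupfs cgroup manager, make sure the bridge-netfilter module and IP forwarding are on, then restart the daemon. A sketch assembled from the commands in the log (the conmon_cgroup and default_sysctls edits follow the same sed pattern and are omitted here):

    # Point crictl at the CRI-O socket.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # Pin the pause image and use the cgroupfs cgroup manager.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    # Kernel prerequisites for pod networking.
    sudo modprobe br_netfilter
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    # Apply everything.
    sudo systemctl daemon-reload && sudo systemctl restart crio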
	I0812 10:37:51.832904   22139 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 10:37:51.832980   22139 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 10:37:51.837394   22139 start.go:563] Will wait 60s for crictl version
	I0812 10:37:51.837457   22139 ssh_runner.go:195] Run: which crictl
	I0812 10:37:51.841299   22139 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 10:37:51.880357   22139 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0812 10:37:51.880424   22139 ssh_runner.go:195] Run: crio --version
	I0812 10:37:51.910678   22139 ssh_runner.go:195] Run: crio --version
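The three probes above (stat on the socket, crictl version, crio --version) confirm the runtime came back after the restart; they can be reproduced by hand (sketch):

    stat /var/run/crio/crio.sock                                      # the socket must exist before kubelet can use CRI-O
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    crio --version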
	I0812 10:37:51.941770   22139 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0812 10:37:51.943452   22139 out.go:177]   - env NO_PROXY=192.168.39.5
	I0812 10:37:51.944794   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetIP
	I0812 10:37:51.947576   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:51.947933   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:51.947969   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:51.948192   22139 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0812 10:37:51.952212   22139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
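The one-liner above is a replace-or-append edit of /etc/hosts: any existing host.minikube.internal line is filtered out, the fresh mapping is appended, and the file is copied back in one shot. Spelled out with the same values (sketch):

    entry=$'192.168.39.1\thost.minikube.internal'
    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "$entry"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts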
	I0812 10:37:51.964039   22139 mustload.go:65] Loading cluster: ha-919901
	I0812 10:37:51.964238   22139 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:37:51.964513   22139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:37:51.964538   22139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:37:51.979245   22139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35383
	I0812 10:37:51.979712   22139 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:37:51.980167   22139 main.go:141] libmachine: Using API Version  1
	I0812 10:37:51.980190   22139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:37:51.980466   22139 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:37:51.980643   22139 main.go:141] libmachine: (ha-919901) Calling .GetState
	I0812 10:37:51.982290   22139 host.go:66] Checking if "ha-919901" exists ...
	I0812 10:37:51.982690   22139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:37:51.982722   22139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:37:51.997855   22139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42515
	I0812 10:37:51.998260   22139 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:37:51.998861   22139 main.go:141] libmachine: Using API Version  1
	I0812 10:37:51.998881   22139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:37:51.999213   22139 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:37:51.999399   22139 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:37:51.999584   22139 certs.go:68] Setting up /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901 for IP: 192.168.39.139
	I0812 10:37:51.999595   22139 certs.go:194] generating shared ca certs ...
	I0812 10:37:51.999612   22139 certs.go:226] acquiring lock for ca certs: {Name:mkab0b83a49d5695bc804bf960b468b7a47f73f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:37:51.999729   22139 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key
	I0812 10:37:51.999769   22139 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key
	I0812 10:37:51.999781   22139 certs.go:256] generating profile certs ...
	I0812 10:37:51.999865   22139 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/client.key
	I0812 10:37:51.999888   22139 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key.e79e017f
	I0812 10:37:51.999902   22139 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt.e79e017f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.5 192.168.39.139 192.168.39.254]
	I0812 10:37:52.103250   22139 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt.e79e017f ...
	I0812 10:37:52.103277   22139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt.e79e017f: {Name:mke462d4f0c27362085929f70613afd49818b647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:37:52.103437   22139 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key.e79e017f ...
	I0812 10:37:52.103449   22139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key.e79e017f: {Name:mk18c46c24dd2af2af961266b2e619e3af1f3a06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:37:52.103513   22139 certs.go:381] copying /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt.e79e017f -> /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt
	I0812 10:37:52.103662   22139 certs.go:385] copying /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key.e79e017f -> /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key
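minikube builds the apiserver serving certificate in Go (the crypto.go lines above), issuing it against the shared minikubeCA with SANs for the service IP, localhost, both control-plane node IPs and the HA VIP. For reference only, an equivalent certificate could be produced with openssl along these lines; the key size, validity, CN and file names are illustrative assumptions, and only the IP SAN list is taken from the log:

    openssl req -new -newkey rsa:2048 -nodes -subj "/CN=minikube" \
      -keyout apiserver.key -out apiserver.csr
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -days 365 -out apiserver.crt \
      -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.39.5,IP:192.168.39.139,IP:192.168.39.254\n')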
	I0812 10:37:52.103798   22139 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.key
	I0812 10:37:52.103816   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0812 10:37:52.103831   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0812 10:37:52.103843   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0812 10:37:52.103855   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0812 10:37:52.103865   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0812 10:37:52.103877   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0812 10:37:52.103888   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0812 10:37:52.103902   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0812 10:37:52.103949   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem (1338 bytes)
	W0812 10:37:52.103979   22139 certs.go:480] ignoring /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927_empty.pem, impossibly tiny 0 bytes
	I0812 10:37:52.103989   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem (1679 bytes)
	I0812 10:37:52.104013   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem (1082 bytes)
	I0812 10:37:52.104035   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem (1123 bytes)
	I0812 10:37:52.104059   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem (1679 bytes)
	I0812 10:37:52.104100   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem (1708 bytes)
	I0812 10:37:52.104125   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> /usr/share/ca-certificates/109272.pem
	I0812 10:37:52.104139   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:37:52.104151   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem -> /usr/share/ca-certificates/10927.pem
	I0812 10:37:52.104178   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:37:52.107325   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:37:52.107794   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:37:52.107832   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:37:52.107982   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:37:52.108158   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:37:52.108270   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:37:52.108367   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:37:52.181362   22139 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0812 10:37:52.185983   22139 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0812 10:37:52.196883   22139 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0812 10:37:52.201493   22139 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0812 10:37:52.212532   22139 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0812 10:37:52.217010   22139 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0812 10:37:52.228117   22139 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0812 10:37:52.232146   22139 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0812 10:37:52.243051   22139 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0812 10:37:52.247288   22139 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0812 10:37:52.257695   22139 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0812 10:37:52.262001   22139 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0812 10:37:52.273500   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 10:37:52.300730   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 10:37:52.324084   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 10:37:52.348251   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 10:37:52.371770   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0812 10:37:52.395047   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0812 10:37:52.417533   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 10:37:52.440439   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0812 10:37:52.463551   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /usr/share/ca-certificates/109272.pem (1708 bytes)
	I0812 10:37:52.490025   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 10:37:52.514468   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem --> /usr/share/ca-certificates/10927.pem (1338 bytes)
	I0812 10:37:52.538852   22139 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0812 10:37:52.556272   22139 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0812 10:37:52.572785   22139 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0812 10:37:52.589420   22139 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0812 10:37:52.605303   22139 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0812 10:37:52.622281   22139 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0812 10:37:52.638097   22139 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0812 10:37:52.654849   22139 ssh_runner.go:195] Run: openssl version
	I0812 10:37:52.660503   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10927.pem && ln -fs /usr/share/ca-certificates/10927.pem /etc/ssl/certs/10927.pem"
	I0812 10:37:52.670964   22139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10927.pem
	I0812 10:37:52.675215   22139 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 10:33 /usr/share/ca-certificates/10927.pem
	I0812 10:37:52.675267   22139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10927.pem
	I0812 10:37:52.680886   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10927.pem /etc/ssl/certs/51391683.0"
	I0812 10:37:52.691217   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109272.pem && ln -fs /usr/share/ca-certificates/109272.pem /etc/ssl/certs/109272.pem"
	I0812 10:37:52.702203   22139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109272.pem
	I0812 10:37:52.706268   22139 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 10:33 /usr/share/ca-certificates/109272.pem
	I0812 10:37:52.706329   22139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109272.pem
	I0812 10:37:52.711765   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109272.pem /etc/ssl/certs/3ec20f2e.0"
	I0812 10:37:52.722023   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 10:37:52.732135   22139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:37:52.736291   22139 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:37:52.736353   22139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:37:52.741886   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
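The test/ln/openssl sequence above installs each CA into the node's trust store using OpenSSL's hash-named symlink convention: first a name link from /usr/share/ca-certificates into /etc/ssl/certs, then a <hash>.0 link that OpenSSL actually resolves at verification time. As a loop (sketch; file names and the b5213941 hash come from the log):

    for pem in 109272.pem 10927.pem minikubeCA.pem; do
      src="/usr/share/ca-certificates/${pem}"
      sudo ln -fs "$src" "/etc/ssl/certs/${pem}"                      # name link into the trust dir
      hash=$(openssl x509 -hash -noout -in "$src")                    # e.g. b5213941 for minikubeCA.pem
      sudo ln -fs "/etc/ssl/certs/${pem}" "/etc/ssl/certs/${hash}.0"  # hash link used for lookups
    done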
	I0812 10:37:52.752267   22139 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 10:37:52.756072   22139 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0812 10:37:52.756123   22139 kubeadm.go:934] updating node {m02 192.168.39.139 8443 v1.30.3 crio true true} ...
	I0812 10:37:52.756200   22139 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-919901-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.139
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-919901 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
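The [Unit]/[Service]/[Install] snippet above is the systemd drop-in for kubelet on m02 (what the 313-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf writes further down), pinning --node-ip and --hostname-override to this node. Once it is in place, the effective unit can be inspected with systemd (sketch):

    systemctl cat kubelet                 # kubelet.service plus the 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart   # confirms the --node-ip / --hostname-override flags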
	I0812 10:37:52.756225   22139 kube-vip.go:115] generating kube-vip config ...
	I0812 10:37:52.756258   22139 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0812 10:37:52.772983   22139 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0812 10:37:52.773043   22139 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
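The manifest above is the kube-vip static pod that keeps the control-plane VIP 192.168.39.254 on eth0 and load-balances API traffic on port 8443; it is copied to /etc/kubernetes/manifests/kube-vip.yaml further down (the 1441-byte scp), so kubelet runs it without needing the API server first. Two quick checks once kubelet is up (sketch; the VIP only shows on the current leader):

    sudo crictl ps --name kube-vip             # the static pod's container should be running
    ip addr show eth0 | grep 192.168.39.254    # the VIP is attached to the leader's interface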
	I0812 10:37:52.773091   22139 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0812 10:37:52.782547   22139 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0812 10:37:52.782618   22139 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0812 10:37:52.792186   22139 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0812 10:37:52.792219   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0812 10:37:52.792246   22139 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19409-3774/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0812 10:37:52.792287   22139 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19409-3774/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0812 10:37:52.792299   22139 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0812 10:37:52.797070   22139 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0812 10:37:52.797105   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0812 10:37:56.387092   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0812 10:37:56.387185   22139 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0812 10:37:56.391994   22139 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0812 10:37:56.392032   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0812 10:38:07.410733   22139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:38:07.426761   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0812 10:38:07.426856   22139 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0812 10:38:07.431668   22139 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0812 10:38:07.431707   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
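Each download above pairs the binary with its .sha256 from dl.k8s.io (the checksum=file:... suffix on the URL). The same fetch-and-verify can be done by hand; dl.k8s.io publishes the bare hex digest in the .sha256 file, so the file name has to be appended before sha256sum can check it (sketch):

    ver=v1.30.3
    for bin in kubelet kubeadm kubectl; do
      curl -fsSLO "https://dl.k8s.io/release/${ver}/bin/linux/amd64/${bin}"
      echo "$(curl -fsSL https://dl.k8s.io/release/${ver}/bin/linux/amd64/${bin}.sha256)  ${bin}" | sha256sum --check
    done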
	I0812 10:38:07.816979   22139 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0812 10:38:07.826989   22139 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0812 10:38:07.843396   22139 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 10:38:07.860325   22139 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0812 10:38:07.876513   22139 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0812 10:38:07.880379   22139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 10:38:07.892322   22139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 10:38:08.015488   22139 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 10:38:08.033052   22139 host.go:66] Checking if "ha-919901" exists ...
	I0812 10:38:08.033474   22139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:38:08.033513   22139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:38:08.048583   22139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46023
	I0812 10:38:08.049094   22139 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:38:08.049629   22139 main.go:141] libmachine: Using API Version  1
	I0812 10:38:08.049652   22139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:38:08.049967   22139 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:38:08.050179   22139 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:38:08.050319   22139 start.go:317] joinCluster: &{Name:ha-919901 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-919901 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 10:38:08.050436   22139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0812 10:38:08.050458   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:38:08.053750   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:38:08.054113   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:38:08.054157   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:38:08.054311   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:38:08.054516   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:38:08.054670   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:38:08.054843   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:38:08.210441   22139 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 10:38:08.210483   22139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3km7df.rl0mno282pd477ol --discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-919901-m02 --control-plane --apiserver-advertise-address=192.168.39.139 --apiserver-bind-port=8443"
	I0812 10:38:31.080328   22139 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3km7df.rl0mno282pd477ol --discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-919901-m02 --control-plane --apiserver-advertise-address=192.168.39.139 --apiserver-bind-port=8443": (22.869804459s)
	I0812 10:38:31.080363   22139 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0812 10:38:31.624619   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-919901-m02 minikube.k8s.io/updated_at=2024_08_12T10_38_31_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7 minikube.k8s.io/name=ha-919901 minikube.k8s.io/primary=false
	I0812 10:38:31.746083   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-919901-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0812 10:38:31.905406   22139 start.go:319] duration metric: took 23.85508197s to joinCluster
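Condensed, the join above is the standard two-step kubeadm flow: mint a join command on the existing control plane, then run it on the new node with the extra control-plane flags minikube adds. Because the cluster certificates were already scp'd onto m02 earlier in this log, no --certificate-key upload is involved. Sketch, with the per-cluster token and CA hash left as placeholders:

    # On the existing control-plane node:
    sudo kubeadm token create --print-join-command --ttl=0
    # On the joining node (flags mirror the command in the log):
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane --apiserver-advertise-address=192.168.39.139 \
      --apiserver-bind-port=8443 --cri-socket unix:///var/run/crio/crio.sock \
      --node-name=ha-919901-m02 --ignore-preflight-errors=all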
	I0812 10:38:31.905474   22139 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 10:38:31.905822   22139 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:38:31.907125   22139 out.go:177] * Verifying Kubernetes components...
	I0812 10:38:31.908554   22139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 10:38:32.179187   22139 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 10:38:32.225563   22139 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 10:38:32.225828   22139 kapi.go:59] client config for ha-919901: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/client.crt", KeyFile:"/home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/client.key", CAFile:"/home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d03100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0812 10:38:32.225893   22139 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.5:8443
	I0812 10:38:32.226113   22139 node_ready.go:35] waiting up to 6m0s for node "ha-919901-m02" to be "Ready" ...
	I0812 10:38:32.226206   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:32.226220   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:32.226231   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:32.226243   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:32.241504   22139 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0812 10:38:32.726307   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:32.726335   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:32.726346   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:32.726352   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:32.732174   22139 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0812 10:38:33.226656   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:33.226678   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:33.226691   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:33.226695   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:33.232112   22139 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0812 10:38:33.726785   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:33.726809   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:33.726818   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:33.726823   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:33.730913   22139 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 10:38:34.227085   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:34.227111   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:34.227120   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:34.227126   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:34.230851   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:34.231447   22139 node_ready.go:53] node "ha-919901-m02" has status "Ready":"False"
	I0812 10:38:34.726991   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:34.727018   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:34.727030   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:34.727038   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:34.730671   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:35.226808   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:35.226828   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:35.226835   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:35.226839   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:35.231030   22139 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 10:38:35.726413   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:35.726449   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:35.726457   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:35.726462   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:35.730777   22139 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 10:38:36.226375   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:36.226400   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:36.226418   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:36.226424   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:36.230030   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:36.726881   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:36.726908   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:36.726918   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:36.726924   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:36.730189   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:36.730849   22139 node_ready.go:53] node "ha-919901-m02" has status "Ready":"False"
	I0812 10:38:37.227210   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:37.227233   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:37.227240   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:37.227244   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:37.230741   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:37.726942   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:37.726966   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:37.726976   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:37.726981   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:37.738632   22139 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0812 10:38:38.227025   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:38.227046   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:38.227054   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:38.227058   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:38.230254   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:38.726657   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:38.726697   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:38.726709   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:38.726714   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:38.732549   22139 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0812 10:38:38.733426   22139 node_ready.go:53] node "ha-919901-m02" has status "Ready":"False"
	I0812 10:38:39.226871   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:39.226890   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:39.226898   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:39.226903   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:39.229835   22139 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 10:38:39.726495   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:39.726518   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:39.726526   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:39.726530   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:39.729687   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:40.226620   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:40.226646   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:40.226656   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:40.226662   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:40.229679   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:40.726552   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:40.726575   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:40.726583   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:40.726588   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:40.729769   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:41.226663   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:41.226690   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:41.226702   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:41.226707   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:41.236833   22139 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0812 10:38:41.237603   22139 node_ready.go:53] node "ha-919901-m02" has status "Ready":"False"
	I0812 10:38:41.726563   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:41.726589   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:41.726601   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:41.726608   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:41.729990   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:42.227153   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:42.227182   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:42.227193   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:42.227198   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:42.230829   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:42.726660   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:42.726688   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:42.726696   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:42.726699   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:42.730345   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:43.226482   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:43.226507   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:43.226517   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:43.226523   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:43.229844   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:43.727251   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:43.727274   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:43.727282   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:43.727286   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:43.731023   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:43.731816   22139 node_ready.go:53] node "ha-919901-m02" has status "Ready":"False"
	I0812 10:38:44.227276   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:44.227305   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:44.227316   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:44.227323   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:44.230564   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:44.726581   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:44.726612   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:44.726623   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:44.726628   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:44.729746   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:45.226791   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:45.226819   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:45.226827   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:45.226833   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:45.230032   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:45.727207   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:45.727231   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:45.727239   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:45.727243   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:45.730471   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:46.226474   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:46.226503   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:46.226512   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:46.226516   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:46.229727   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:46.230324   22139 node_ready.go:53] node "ha-919901-m02" has status "Ready":"False"
	I0812 10:38:46.726377   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:46.726401   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:46.726408   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:46.726413   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:46.729512   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:47.226695   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:47.226724   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:47.226734   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:47.226738   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:47.230484   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:47.726798   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:47.726829   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:47.726838   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:47.726841   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:47.730707   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:48.227105   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:48.227128   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:48.227136   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:48.227141   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:48.230801   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:48.231561   22139 node_ready.go:53] node "ha-919901-m02" has status "Ready":"False"
	I0812 10:38:48.726420   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:48.726445   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:48.726455   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:48.726461   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:48.730193   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:49.226336   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:49.226360   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:49.226368   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:49.226372   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:49.229915   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:49.726978   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:49.727001   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:49.727010   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:49.727014   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:49.730144   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:49.730713   22139 node_ready.go:49] node "ha-919901-m02" has status "Ready":"True"
	I0812 10:38:49.730731   22139 node_ready.go:38] duration metric: took 17.50460046s for node "ha-919901-m02" to be "Ready" ...
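For context, the ~500ms GET loop above is the client polling the node object until its Ready condition flips to True. Below is a minimal, hypothetical client-go sketch of that kind of poll; it is not minikube's node_ready.go, and it assumes a kubeconfig at the default path plus the node name taken from this log.

// Hypothetical sketch: poll a node's Ready condition with client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumes ~/.kube/config
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	const nodeName = "ha-919901-m02" // node name from the log above
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					fmt.Printf("node %q is Ready\n", nodeName)
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the log shows roughly 500ms between GETs
	}
}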
	I0812 10:38:49.730739   22139 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 10:38:49.730797   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0812 10:38:49.730804   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:49.730812   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:49.730822   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:49.735736   22139 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 10:38:49.741879   22139 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rc7cl" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:49.741983   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rc7cl
	I0812 10:38:49.741994   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:49.742005   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:49.742013   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:49.745764   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:49.746718   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:38:49.746735   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:49.746748   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:49.746753   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:49.749207   22139 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 10:38:49.749644   22139 pod_ready.go:92] pod "coredns-7db6d8ff4d-rc7cl" in "kube-system" namespace has status "Ready":"True"
	I0812 10:38:49.749660   22139 pod_ready.go:81] duration metric: took 7.755653ms for pod "coredns-7db6d8ff4d-rc7cl" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:49.749670   22139 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wstd4" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:49.749718   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-wstd4
	I0812 10:38:49.749725   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:49.749732   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:49.749738   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:49.752354   22139 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 10:38:49.753169   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:38:49.753187   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:49.753197   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:49.753200   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:49.756221   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:49.756972   22139 pod_ready.go:92] pod "coredns-7db6d8ff4d-wstd4" in "kube-system" namespace has status "Ready":"True"
	I0812 10:38:49.756989   22139 pod_ready.go:81] duration metric: took 7.312835ms for pod "coredns-7db6d8ff4d-wstd4" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:49.756998   22139 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-919901" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:49.757054   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-919901
	I0812 10:38:49.757063   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:49.757070   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:49.757074   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:49.759711   22139 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 10:38:49.760409   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:38:49.760421   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:49.760428   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:49.760431   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:49.763367   22139 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 10:38:49.763803   22139 pod_ready.go:92] pod "etcd-ha-919901" in "kube-system" namespace has status "Ready":"True"
	I0812 10:38:49.763821   22139 pod_ready.go:81] duration metric: took 6.817376ms for pod "etcd-ha-919901" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:49.763831   22139 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-919901-m02" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:49.763903   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-919901-m02
	I0812 10:38:49.763913   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:49.763919   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:49.763922   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:49.766801   22139 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 10:38:49.767604   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:49.767620   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:49.767636   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:49.767640   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:49.770437   22139 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 10:38:49.770792   22139 pod_ready.go:92] pod "etcd-ha-919901-m02" in "kube-system" namespace has status "Ready":"True"
	I0812 10:38:49.770808   22139 pod_ready.go:81] duration metric: took 6.970572ms for pod "etcd-ha-919901-m02" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:49.770821   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-919901" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:49.927159   22139 request.go:629] Waited for 156.277068ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-919901
	I0812 10:38:49.927255   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-919901
	I0812 10:38:49.927267   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:49.927278   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:49.927289   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:49.930631   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:50.127617   22139 request.go:629] Waited for 196.417628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:38:50.127710   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:38:50.127719   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:50.127728   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:50.127734   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:50.131094   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:50.131662   22139 pod_ready.go:92] pod "kube-apiserver-ha-919901" in "kube-system" namespace has status "Ready":"True"
	I0812 10:38:50.131684   22139 pod_ready.go:81] duration metric: took 360.85671ms for pod "kube-apiserver-ha-919901" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:50.131693   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-919901-m02" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:50.327667   22139 request.go:629] Waited for 195.895295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-919901-m02
	I0812 10:38:50.327727   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-919901-m02
	I0812 10:38:50.327732   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:50.327739   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:50.327744   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:50.330866   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:50.527854   22139 request.go:629] Waited for 196.367698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:50.527919   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:50.527947   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:50.527958   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:50.527966   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:50.532132   22139 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 10:38:50.533005   22139 pod_ready.go:92] pod "kube-apiserver-ha-919901-m02" in "kube-system" namespace has status "Ready":"True"
	I0812 10:38:50.533025   22139 pod_ready.go:81] duration metric: took 401.325416ms for pod "kube-apiserver-ha-919901-m02" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:50.533034   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-919901" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:50.727037   22139 request.go:629] Waited for 193.930717ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-919901
	I0812 10:38:50.727094   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-919901
	I0812 10:38:50.727099   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:50.727109   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:50.727115   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:50.730807   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:50.927730   22139 request.go:629] Waited for 196.334188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:38:50.927804   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:38:50.927810   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:50.927817   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:50.927820   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:50.931132   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:50.931685   22139 pod_ready.go:92] pod "kube-controller-manager-ha-919901" in "kube-system" namespace has status "Ready":"True"
	I0812 10:38:50.931707   22139 pod_ready.go:81] duration metric: took 398.666953ms for pod "kube-controller-manager-ha-919901" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:50.931716   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-919901-m02" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:51.127764   22139 request.go:629] Waited for 195.969056ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-919901-m02
	I0812 10:38:51.127829   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-919901-m02
	I0812 10:38:51.127836   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:51.127847   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:51.127855   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:51.131164   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:51.326963   22139 request.go:629] Waited for 195.080527ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:51.327036   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:51.327042   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:51.327050   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:51.327054   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:51.331212   22139 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 10:38:51.331666   22139 pod_ready.go:92] pod "kube-controller-manager-ha-919901-m02" in "kube-system" namespace has status "Ready":"True"
	I0812 10:38:51.331686   22139 pod_ready.go:81] duration metric: took 399.963516ms for pod "kube-controller-manager-ha-919901-m02" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:51.331696   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cczfj" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:51.527131   22139 request.go:629] Waited for 195.356334ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cczfj
	I0812 10:38:51.527194   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cczfj
	I0812 10:38:51.527202   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:51.527213   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:51.527221   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:51.530551   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:51.727563   22139 request.go:629] Waited for 196.347965ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:51.727635   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:51.727641   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:51.727648   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:51.727652   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:51.730969   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:51.731393   22139 pod_ready.go:92] pod "kube-proxy-cczfj" in "kube-system" namespace has status "Ready":"True"
	I0812 10:38:51.731411   22139 pod_ready.go:81] duration metric: took 399.709277ms for pod "kube-proxy-cczfj" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:51.731420   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ftvfl" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:51.927584   22139 request.go:629] Waited for 196.106818ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ftvfl
	I0812 10:38:51.927654   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ftvfl
	I0812 10:38:51.927661   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:51.927671   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:51.927675   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:51.931432   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:52.127492   22139 request.go:629] Waited for 195.483215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:38:52.127565   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:38:52.127572   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:52.127582   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:52.127591   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:52.131126   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:52.131914   22139 pod_ready.go:92] pod "kube-proxy-ftvfl" in "kube-system" namespace has status "Ready":"True"
	I0812 10:38:52.131934   22139 pod_ready.go:81] duration metric: took 400.509323ms for pod "kube-proxy-ftvfl" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:52.131943   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-919901" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:52.328036   22139 request.go:629] Waited for 196.023184ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-919901
	I0812 10:38:52.328118   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-919901
	I0812 10:38:52.328126   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:52.328136   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:52.328143   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:52.331516   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:52.527368   22139 request.go:629] Waited for 195.356406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:38:52.527442   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:38:52.527447   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:52.527454   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:52.527458   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:52.531233   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:52.531867   22139 pod_ready.go:92] pod "kube-scheduler-ha-919901" in "kube-system" namespace has status "Ready":"True"
	I0812 10:38:52.531886   22139 pod_ready.go:81] duration metric: took 399.936973ms for pod "kube-scheduler-ha-919901" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:52.531897   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-919901-m02" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:52.727059   22139 request.go:629] Waited for 195.088541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-919901-m02
	I0812 10:38:52.727166   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-919901-m02
	I0812 10:38:52.727178   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:52.727189   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:52.727201   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:52.731062   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:52.928053   22139 request.go:629] Waited for 196.421191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:52.928132   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:52.928140   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:52.928151   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:52.928156   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:52.931935   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:52.932683   22139 pod_ready.go:92] pod "kube-scheduler-ha-919901-m02" in "kube-system" namespace has status "Ready":"True"
	I0812 10:38:52.932704   22139 pod_ready.go:81] duration metric: took 400.799965ms for pod "kube-scheduler-ha-919901-m02" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:52.932715   22139 pod_ready.go:38] duration metric: took 3.20196498s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
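Each pod_ready check above is a GET on the pod followed by a look at its Ready condition, then a GET on the node it runs on. A hypothetical client-go sketch of the same condition check over the kube-system namespace (again assuming a default kubeconfig; this is an illustration, not minikube's pod_ready.go):

// Hypothetical sketch: report the Ready condition of kube-system pods.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady returns true when the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, cond := range p.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%-45s ready=%v\n", p.Name, podReady(&p))
	}
}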
	I0812 10:38:52.932730   22139 api_server.go:52] waiting for apiserver process to appear ...
	I0812 10:38:52.932788   22139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:38:52.948888   22139 api_server.go:72] duration metric: took 21.043379284s to wait for apiserver process to appear ...
	I0812 10:38:52.948914   22139 api_server.go:88] waiting for apiserver healthz status ...
	I0812 10:38:52.948932   22139 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I0812 10:38:52.953103   22139 api_server.go:279] https://192.168.39.5:8443/healthz returned 200:
	ok
	I0812 10:38:52.953162   22139 round_trippers.go:463] GET https://192.168.39.5:8443/version
	I0812 10:38:52.953167   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:52.953175   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:52.953184   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:52.954149   22139 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0812 10:38:52.954246   22139 api_server.go:141] control plane version: v1.30.3
	I0812 10:38:52.954261   22139 api_server.go:131] duration metric: took 5.341963ms to wait for apiserver health ...
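The two checks just logged are a raw GET on /healthz (expecting the literal body "ok") and a GET on /version to read the control plane version. A hypothetical client-go sketch of both, assuming a default kubeconfig; not minikube's api_server.go:

// Hypothetical sketch: apiserver healthz and version checks via client-go.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// GET /healthz on the apiserver; a healthy server answers with "ok".
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version, the source of the "control plane version" line above.
	v, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("control plane version: %s\n", v.GitVersion)
}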
	I0812 10:38:52.954269   22139 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 10:38:53.127956   22139 request.go:629] Waited for 173.629365ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0812 10:38:53.128015   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0812 10:38:53.128021   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:53.128031   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:53.128037   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:53.133390   22139 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0812 10:38:53.137526   22139 system_pods.go:59] 17 kube-system pods found
	I0812 10:38:53.137564   22139 system_pods.go:61] "coredns-7db6d8ff4d-rc7cl" [92f21234-d4e8-4f0e-a8e5-356db2297843] Running
	I0812 10:38:53.137569   22139 system_pods.go:61] "coredns-7db6d8ff4d-wstd4" [53bfc998-8d70-4dc5-b0f9-a78610183a2b] Running
	I0812 10:38:53.137573   22139 system_pods.go:61] "etcd-ha-919901" [a2c1d3fe-ff0a-4239-86b1-fa95100bf490] Running
	I0812 10:38:53.137577   22139 system_pods.go:61] "etcd-ha-919901-m02" [37a916a1-fb7f-4256-9ce9-e77d68b91eec] Running
	I0812 10:38:53.137580   22139 system_pods.go:61] "kindnet-8cqm5" [ac0a56b9-e7f9-439d-a088-54463e9d41bc] Running
	I0812 10:38:53.137583   22139 system_pods.go:61] "kindnet-k5wz9" [75e585a5-9ab7-4211-8ed0-dc1d21345883] Running
	I0812 10:38:53.137587   22139 system_pods.go:61] "kube-apiserver-ha-919901" [193c8d04-dc77-4004-8000-fd396b727895] Running
	I0812 10:38:53.137590   22139 system_pods.go:61] "kube-apiserver-ha-919901-m02" [58d119c5-c69e-4a89-bab6-18a82f0cfe3f] Running
	I0812 10:38:53.137593   22139 system_pods.go:61] "kube-controller-manager-ha-919901" [242663e4-854c-4b58-9864-cabeb79577f7] Running
	I0812 10:38:53.137596   22139 system_pods.go:61] "kube-controller-manager-ha-919901-m02" [8036adae-dadc-4dbe-af53-de82cc21d9c2] Running
	I0812 10:38:53.137599   22139 system_pods.go:61] "kube-proxy-cczfj" [711059fc-2c4a-4706-97a5-007be66ecaff] Running
	I0812 10:38:53.137602   22139 system_pods.go:61] "kube-proxy-ftvfl" [7ed243a1-62f6-4ad1-8873-0fbe1756be9e] Running
	I0812 10:38:53.137605   22139 system_pods.go:61] "kube-scheduler-ha-919901" [ec67c1cf-8e1c-4973-8f96-c558fccb26be] Running
	I0812 10:38:53.137608   22139 system_pods.go:61] "kube-scheduler-ha-919901-m02" [8cf797a6-cf19-4653-a998-395260a0ee1a] Running
	I0812 10:38:53.137611   22139 system_pods.go:61] "kube-vip-ha-919901" [46735446-a563-4870-9509-441ad0cd5c45] Running
	I0812 10:38:53.137615   22139 system_pods.go:61] "kube-vip-ha-919901-m02" [9df99381-0503-4bef-ac63-a06f687d1c1a] Running
	I0812 10:38:53.137622   22139 system_pods.go:61] "storage-provisioner" [6d697e68-33fa-4784-90d8-0561d3fff6a8] Running
	I0812 10:38:53.137630   22139 system_pods.go:74] duration metric: took 183.354956ms to wait for pod list to return data ...
	I0812 10:38:53.137644   22139 default_sa.go:34] waiting for default service account to be created ...
	I0812 10:38:53.327062   22139 request.go:629] Waited for 189.323961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/default/serviceaccounts
	I0812 10:38:53.327126   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/default/serviceaccounts
	I0812 10:38:53.327133   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:53.327144   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:53.327148   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:53.331496   22139 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 10:38:53.331781   22139 default_sa.go:45] found service account: "default"
	I0812 10:38:53.331805   22139 default_sa.go:55] duration metric: took 194.152257ms for default service account to be created ...
	I0812 10:38:53.331816   22139 system_pods.go:116] waiting for k8s-apps to be running ...
	I0812 10:38:53.527422   22139 request.go:629] Waited for 195.539325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0812 10:38:53.527490   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0812 10:38:53.527495   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:53.527502   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:53.527506   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:53.533723   22139 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0812 10:38:53.537850   22139 system_pods.go:86] 17 kube-system pods found
	I0812 10:38:53.537879   22139 system_pods.go:89] "coredns-7db6d8ff4d-rc7cl" [92f21234-d4e8-4f0e-a8e5-356db2297843] Running
	I0812 10:38:53.537884   22139 system_pods.go:89] "coredns-7db6d8ff4d-wstd4" [53bfc998-8d70-4dc5-b0f9-a78610183a2b] Running
	I0812 10:38:53.537893   22139 system_pods.go:89] "etcd-ha-919901" [a2c1d3fe-ff0a-4239-86b1-fa95100bf490] Running
	I0812 10:38:53.537897   22139 system_pods.go:89] "etcd-ha-919901-m02" [37a916a1-fb7f-4256-9ce9-e77d68b91eec] Running
	I0812 10:38:53.537901   22139 system_pods.go:89] "kindnet-8cqm5" [ac0a56b9-e7f9-439d-a088-54463e9d41bc] Running
	I0812 10:38:53.537905   22139 system_pods.go:89] "kindnet-k5wz9" [75e585a5-9ab7-4211-8ed0-dc1d21345883] Running
	I0812 10:38:53.537909   22139 system_pods.go:89] "kube-apiserver-ha-919901" [193c8d04-dc77-4004-8000-fd396b727895] Running
	I0812 10:38:53.537913   22139 system_pods.go:89] "kube-apiserver-ha-919901-m02" [58d119c5-c69e-4a89-bab6-18a82f0cfe3f] Running
	I0812 10:38:53.537917   22139 system_pods.go:89] "kube-controller-manager-ha-919901" [242663e4-854c-4b58-9864-cabeb79577f7] Running
	I0812 10:38:53.537921   22139 system_pods.go:89] "kube-controller-manager-ha-919901-m02" [8036adae-dadc-4dbe-af53-de82cc21d9c2] Running
	I0812 10:38:53.537926   22139 system_pods.go:89] "kube-proxy-cczfj" [711059fc-2c4a-4706-97a5-007be66ecaff] Running
	I0812 10:38:53.537935   22139 system_pods.go:89] "kube-proxy-ftvfl" [7ed243a1-62f6-4ad1-8873-0fbe1756be9e] Running
	I0812 10:38:53.537941   22139 system_pods.go:89] "kube-scheduler-ha-919901" [ec67c1cf-8e1c-4973-8f96-c558fccb26be] Running
	I0812 10:38:53.537947   22139 system_pods.go:89] "kube-scheduler-ha-919901-m02" [8cf797a6-cf19-4653-a998-395260a0ee1a] Running
	I0812 10:38:53.537955   22139 system_pods.go:89] "kube-vip-ha-919901" [46735446-a563-4870-9509-441ad0cd5c45] Running
	I0812 10:38:53.537962   22139 system_pods.go:89] "kube-vip-ha-919901-m02" [9df99381-0503-4bef-ac63-a06f687d1c1a] Running
	I0812 10:38:53.537971   22139 system_pods.go:89] "storage-provisioner" [6d697e68-33fa-4784-90d8-0561d3fff6a8] Running
	I0812 10:38:53.537978   22139 system_pods.go:126] duration metric: took 206.157149ms to wait for k8s-apps to be running ...
	I0812 10:38:53.537987   22139 system_svc.go:44] waiting for kubelet service to be running ....
	I0812 10:38:53.538030   22139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:38:53.553266   22139 system_svc.go:56] duration metric: took 15.26828ms WaitForService to wait for kubelet
	I0812 10:38:53.553295   22139 kubeadm.go:582] duration metric: took 21.647791829s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 10:38:53.553316   22139 node_conditions.go:102] verifying NodePressure condition ...
	I0812 10:38:53.727714   22139 request.go:629] Waited for 174.32901ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes
	I0812 10:38:53.727770   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes
	I0812 10:38:53.727775   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:53.727782   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:53.727786   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:53.732104   22139 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 10:38:53.733158   22139 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 10:38:53.733182   22139 node_conditions.go:123] node cpu capacity is 2
	I0812 10:38:53.733201   22139 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 10:38:53.733205   22139 node_conditions.go:123] node cpu capacity is 2
	I0812 10:38:53.733209   22139 node_conditions.go:105] duration metric: took 179.887884ms to run NodePressure ...
	I0812 10:38:53.733227   22139 start.go:241] waiting for startup goroutines ...
	I0812 10:38:53.733261   22139 start.go:255] writing updated cluster config ...
	I0812 10:38:53.735677   22139 out.go:177] 
	I0812 10:38:53.737271   22139 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:38:53.737407   22139 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/config.json ...
	I0812 10:38:53.739264   22139 out.go:177] * Starting "ha-919901-m03" control-plane node in "ha-919901" cluster
	I0812 10:38:53.740850   22139 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 10:38:53.740902   22139 cache.go:56] Caching tarball of preloaded images
	I0812 10:38:53.741013   22139 preload.go:172] Found /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 10:38:53.741029   22139 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 10:38:53.741144   22139 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/config.json ...
	I0812 10:38:53.741371   22139 start.go:360] acquireMachinesLock for ha-919901-m03: {Name:mkf1f3ff7da562a087a1e344d13e67c3c8140973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 10:38:53.741418   22139 start.go:364] duration metric: took 26.493µs to acquireMachinesLock for "ha-919901-m03"
	I0812 10:38:53.741441   22139 start.go:93] Provisioning new machine with config: &{Name:ha-919901 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-919901 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 10:38:53.741573   22139 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0812 10:38:53.743401   22139 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 10:38:53.743491   22139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:38:53.743524   22139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:38:53.758500   22139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40015
	I0812 10:38:53.758936   22139 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:38:53.759439   22139 main.go:141] libmachine: Using API Version  1
	I0812 10:38:53.759461   22139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:38:53.759847   22139 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:38:53.760039   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetMachineName
	I0812 10:38:53.760203   22139 main.go:141] libmachine: (ha-919901-m03) Calling .DriverName
	I0812 10:38:53.760400   22139 start.go:159] libmachine.API.Create for "ha-919901" (driver="kvm2")
	I0812 10:38:53.760425   22139 client.go:168] LocalClient.Create starting
	I0812 10:38:53.760456   22139 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem
	I0812 10:38:53.760488   22139 main.go:141] libmachine: Decoding PEM data...
	I0812 10:38:53.760503   22139 main.go:141] libmachine: Parsing certificate...
	I0812 10:38:53.760550   22139 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem
	I0812 10:38:53.760568   22139 main.go:141] libmachine: Decoding PEM data...
	I0812 10:38:53.760581   22139 main.go:141] libmachine: Parsing certificate...
	I0812 10:38:53.760599   22139 main.go:141] libmachine: Running pre-create checks...
	I0812 10:38:53.760607   22139 main.go:141] libmachine: (ha-919901-m03) Calling .PreCreateCheck
	I0812 10:38:53.760845   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetConfigRaw
	I0812 10:38:53.761340   22139 main.go:141] libmachine: Creating machine...
	I0812 10:38:53.761353   22139 main.go:141] libmachine: (ha-919901-m03) Calling .Create
	I0812 10:38:53.761491   22139 main.go:141] libmachine: (ha-919901-m03) Creating KVM machine...
	I0812 10:38:53.762838   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found existing default KVM network
	I0812 10:38:53.762960   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found existing private KVM network mk-ha-919901
	I0812 10:38:53.763143   22139 main.go:141] libmachine: (ha-919901-m03) Setting up store path in /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03 ...
	I0812 10:38:53.763170   22139 main.go:141] libmachine: (ha-919901-m03) Building disk image from file:///home/jenkins/minikube-integration/19409-3774/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0812 10:38:53.763238   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:38:53.763134   23028 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 10:38:53.763388   22139 main.go:141] libmachine: (ha-919901-m03) Downloading /home/jenkins/minikube-integration/19409-3774/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19409-3774/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0812 10:38:53.996979   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:38:53.996832   23028 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/id_rsa...
	I0812 10:38:54.081688   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:38:54.081557   23028 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/ha-919901-m03.rawdisk...
	I0812 10:38:54.081714   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Writing magic tar header
	I0812 10:38:54.081729   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Writing SSH key tar header
	I0812 10:38:54.081742   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:38:54.081686   23028 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03 ...
	I0812 10:38:54.081770   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03
	I0812 10:38:54.081830   22139 main.go:141] libmachine: (ha-919901-m03) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03 (perms=drwx------)
	I0812 10:38:54.081849   22139 main.go:141] libmachine: (ha-919901-m03) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube/machines (perms=drwxr-xr-x)
	I0812 10:38:54.081858   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube/machines
	I0812 10:38:54.081868   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 10:38:54.081885   22139 main.go:141] libmachine: (ha-919901-m03) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube (perms=drwxr-xr-x)
	I0812 10:38:54.081896   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774
	I0812 10:38:54.081910   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0812 10:38:54.081920   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Checking permissions on dir: /home/jenkins
	I0812 10:38:54.081930   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Checking permissions on dir: /home
	I0812 10:38:54.081941   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Skipping /home - not owner
	I0812 10:38:54.081955   22139 main.go:141] libmachine: (ha-919901-m03) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774 (perms=drwxrwxr-x)
	I0812 10:38:54.081967   22139 main.go:141] libmachine: (ha-919901-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0812 10:38:54.082002   22139 main.go:141] libmachine: (ha-919901-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0812 10:38:54.082027   22139 main.go:141] libmachine: (ha-919901-m03) Creating domain...
	I0812 10:38:54.082952   22139 main.go:141] libmachine: (ha-919901-m03) define libvirt domain using xml: 
	I0812 10:38:54.082970   22139 main.go:141] libmachine: (ha-919901-m03) <domain type='kvm'>
	I0812 10:38:54.082977   22139 main.go:141] libmachine: (ha-919901-m03)   <name>ha-919901-m03</name>
	I0812 10:38:54.082986   22139 main.go:141] libmachine: (ha-919901-m03)   <memory unit='MiB'>2200</memory>
	I0812 10:38:54.082991   22139 main.go:141] libmachine: (ha-919901-m03)   <vcpu>2</vcpu>
	I0812 10:38:54.083000   22139 main.go:141] libmachine: (ha-919901-m03)   <features>
	I0812 10:38:54.083005   22139 main.go:141] libmachine: (ha-919901-m03)     <acpi/>
	I0812 10:38:54.083012   22139 main.go:141] libmachine: (ha-919901-m03)     <apic/>
	I0812 10:38:54.083017   22139 main.go:141] libmachine: (ha-919901-m03)     <pae/>
	I0812 10:38:54.083025   22139 main.go:141] libmachine: (ha-919901-m03)     
	I0812 10:38:54.083030   22139 main.go:141] libmachine: (ha-919901-m03)   </features>
	I0812 10:38:54.083035   22139 main.go:141] libmachine: (ha-919901-m03)   <cpu mode='host-passthrough'>
	I0812 10:38:54.083041   22139 main.go:141] libmachine: (ha-919901-m03)   
	I0812 10:38:54.083052   22139 main.go:141] libmachine: (ha-919901-m03)   </cpu>
	I0812 10:38:54.083078   22139 main.go:141] libmachine: (ha-919901-m03)   <os>
	I0812 10:38:54.083101   22139 main.go:141] libmachine: (ha-919901-m03)     <type>hvm</type>
	I0812 10:38:54.083112   22139 main.go:141] libmachine: (ha-919901-m03)     <boot dev='cdrom'/>
	I0812 10:38:54.083124   22139 main.go:141] libmachine: (ha-919901-m03)     <boot dev='hd'/>
	I0812 10:38:54.083134   22139 main.go:141] libmachine: (ha-919901-m03)     <bootmenu enable='no'/>
	I0812 10:38:54.083145   22139 main.go:141] libmachine: (ha-919901-m03)   </os>
	I0812 10:38:54.083167   22139 main.go:141] libmachine: (ha-919901-m03)   <devices>
	I0812 10:38:54.083187   22139 main.go:141] libmachine: (ha-919901-m03)     <disk type='file' device='cdrom'>
	I0812 10:38:54.083202   22139 main.go:141] libmachine: (ha-919901-m03)       <source file='/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/boot2docker.iso'/>
	I0812 10:38:54.083213   22139 main.go:141] libmachine: (ha-919901-m03)       <target dev='hdc' bus='scsi'/>
	I0812 10:38:54.083223   22139 main.go:141] libmachine: (ha-919901-m03)       <readonly/>
	I0812 10:38:54.083233   22139 main.go:141] libmachine: (ha-919901-m03)     </disk>
	I0812 10:38:54.083245   22139 main.go:141] libmachine: (ha-919901-m03)     <disk type='file' device='disk'>
	I0812 10:38:54.083262   22139 main.go:141] libmachine: (ha-919901-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0812 10:38:54.083279   22139 main.go:141] libmachine: (ha-919901-m03)       <source file='/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/ha-919901-m03.rawdisk'/>
	I0812 10:38:54.083290   22139 main.go:141] libmachine: (ha-919901-m03)       <target dev='hda' bus='virtio'/>
	I0812 10:38:54.083302   22139 main.go:141] libmachine: (ha-919901-m03)     </disk>
	I0812 10:38:54.083313   22139 main.go:141] libmachine: (ha-919901-m03)     <interface type='network'>
	I0812 10:38:54.083323   22139 main.go:141] libmachine: (ha-919901-m03)       <source network='mk-ha-919901'/>
	I0812 10:38:54.083333   22139 main.go:141] libmachine: (ha-919901-m03)       <model type='virtio'/>
	I0812 10:38:54.083345   22139 main.go:141] libmachine: (ha-919901-m03)     </interface>
	I0812 10:38:54.083356   22139 main.go:141] libmachine: (ha-919901-m03)     <interface type='network'>
	I0812 10:38:54.083370   22139 main.go:141] libmachine: (ha-919901-m03)       <source network='default'/>
	I0812 10:38:54.083380   22139 main.go:141] libmachine: (ha-919901-m03)       <model type='virtio'/>
	I0812 10:38:54.083391   22139 main.go:141] libmachine: (ha-919901-m03)     </interface>
	I0812 10:38:54.083401   22139 main.go:141] libmachine: (ha-919901-m03)     <serial type='pty'>
	I0812 10:38:54.083411   22139 main.go:141] libmachine: (ha-919901-m03)       <target port='0'/>
	I0812 10:38:54.083420   22139 main.go:141] libmachine: (ha-919901-m03)     </serial>
	I0812 10:38:54.083432   22139 main.go:141] libmachine: (ha-919901-m03)     <console type='pty'>
	I0812 10:38:54.083443   22139 main.go:141] libmachine: (ha-919901-m03)       <target type='serial' port='0'/>
	I0812 10:38:54.083453   22139 main.go:141] libmachine: (ha-919901-m03)     </console>
	I0812 10:38:54.083464   22139 main.go:141] libmachine: (ha-919901-m03)     <rng model='virtio'>
	I0812 10:38:54.083476   22139 main.go:141] libmachine: (ha-919901-m03)       <backend model='random'>/dev/random</backend>
	I0812 10:38:54.083488   22139 main.go:141] libmachine: (ha-919901-m03)     </rng>
	I0812 10:38:54.083498   22139 main.go:141] libmachine: (ha-919901-m03)     
	I0812 10:38:54.083507   22139 main.go:141] libmachine: (ha-919901-m03)     
	I0812 10:38:54.083517   22139 main.go:141] libmachine: (ha-919901-m03)   </devices>
	I0812 10:38:54.083528   22139 main.go:141] libmachine: (ha-919901-m03) </domain>
	I0812 10:38:54.083541   22139 main.go:141] libmachine: (ha-919901-m03) 
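Once the XML above is assembled, the kvm2 driver defines and boots the domain through libvirt ("define libvirt domain using xml", then "Creating domain..."). A minimal, hypothetical sketch of those two steps with the libvirt Go bindings; it assumes the libvirt.org/go/libvirt module, the qemu:///system URI from the config, and a domain XML file saved from output like the one logged:

// Hypothetical sketch: define and start a libvirt domain from an XML definition.
package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Read a domain definition like the one printed in the log above
	// (file name is hypothetical).
	xml, err := os.ReadFile("ha-919901-m03.xml")
	if err != nil {
		panic(err)
	}

	conn, err := libvirt.NewConnect("qemu:///system") // KVMQemuURI from the cluster config
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // corresponds to "define libvirt domain using xml"
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // corresponds to "Creating domain..."
		panic(err)
	}
	fmt.Println("domain defined and started")
}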
	I0812 10:38:54.090431   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:48:dd:bb in network default
	I0812 10:38:54.090921   22139 main.go:141] libmachine: (ha-919901-m03) Ensuring networks are active...
	I0812 10:38:54.090948   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:38:54.091665   22139 main.go:141] libmachine: (ha-919901-m03) Ensuring network default is active
	I0812 10:38:54.092020   22139 main.go:141] libmachine: (ha-919901-m03) Ensuring network mk-ha-919901 is active
	I0812 10:38:54.092425   22139 main.go:141] libmachine: (ha-919901-m03) Getting domain xml...
	I0812 10:38:54.093233   22139 main.go:141] libmachine: (ha-919901-m03) Creating domain...
	I0812 10:38:55.394561   22139 main.go:141] libmachine: (ha-919901-m03) Waiting to get IP...
	I0812 10:38:55.395496   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:38:55.395903   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find current IP address of domain ha-919901-m03 in network mk-ha-919901
	I0812 10:38:55.395961   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:38:55.395890   23028 retry.go:31] will retry after 248.022365ms: waiting for machine to come up
	I0812 10:38:55.645744   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:38:55.646146   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find current IP address of domain ha-919901-m03 in network mk-ha-919901
	I0812 10:38:55.646183   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:38:55.646096   23028 retry.go:31] will retry after 385.515989ms: waiting for machine to come up
	I0812 10:38:56.033819   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:38:56.034351   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find current IP address of domain ha-919901-m03 in network mk-ha-919901
	I0812 10:38:56.034379   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:38:56.034303   23028 retry.go:31] will retry after 394.859232ms: waiting for machine to come up
	I0812 10:38:56.430996   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:38:56.431612   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find current IP address of domain ha-919901-m03 in network mk-ha-919901
	I0812 10:38:56.431635   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:38:56.431557   23028 retry.go:31] will retry after 515.927915ms: waiting for machine to come up
	I0812 10:38:56.949288   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:38:56.949840   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find current IP address of domain ha-919901-m03 in network mk-ha-919901
	I0812 10:38:56.949873   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:38:56.949755   23028 retry.go:31] will retry after 615.89923ms: waiting for machine to come up
	I0812 10:38:57.567348   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:38:57.567863   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find current IP address of domain ha-919901-m03 in network mk-ha-919901
	I0812 10:38:57.567882   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:38:57.567815   23028 retry.go:31] will retry after 824.248304ms: waiting for machine to come up
	I0812 10:38:58.393522   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:38:58.394025   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find current IP address of domain ha-919901-m03 in network mk-ha-919901
	I0812 10:38:58.394053   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:38:58.393972   23028 retry.go:31] will retry after 903.663556ms: waiting for machine to come up
	I0812 10:38:59.299460   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:38:59.299991   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find current IP address of domain ha-919901-m03 in network mk-ha-919901
	I0812 10:38:59.300022   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:38:59.299956   23028 retry.go:31] will retry after 943.185292ms: waiting for machine to come up
	I0812 10:39:00.244291   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:00.244745   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find current IP address of domain ha-919901-m03 in network mk-ha-919901
	I0812 10:39:00.244774   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:39:00.244692   23028 retry.go:31] will retry after 1.75910003s: waiting for machine to come up
	I0812 10:39:02.006042   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:02.006370   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find current IP address of domain ha-919901-m03 in network mk-ha-919901
	I0812 10:39:02.006396   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:39:02.006341   23028 retry.go:31] will retry after 1.468388382s: waiting for machine to come up
	I0812 10:39:03.476095   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:03.476591   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find current IP address of domain ha-919901-m03 in network mk-ha-919901
	I0812 10:39:03.476623   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:39:03.476562   23028 retry.go:31] will retry after 2.072007383s: waiting for machine to come up
	I0812 10:39:05.550334   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:05.550976   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find current IP address of domain ha-919901-m03 in network mk-ha-919901
	I0812 10:39:05.551009   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:39:05.550923   23028 retry.go:31] will retry after 2.406978667s: waiting for machine to come up
	I0812 10:39:07.959093   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:07.959428   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find current IP address of domain ha-919901-m03 in network mk-ha-919901
	I0812 10:39:07.959458   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:39:07.959381   23028 retry.go:31] will retry after 4.191781323s: waiting for machine to come up
	I0812 10:39:12.154110   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:12.154496   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find current IP address of domain ha-919901-m03 in network mk-ha-919901
	I0812 10:39:12.154526   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:39:12.154461   23028 retry.go:31] will retry after 3.475577868s: waiting for machine to come up
	I0812 10:39:15.632234   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:15.632880   22139 main.go:141] libmachine: (ha-919901-m03) Found IP for machine: 192.168.39.195
	I0812 10:39:15.632905   22139 main.go:141] libmachine: (ha-919901-m03) Reserving static IP address...
	I0812 10:39:15.632920   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has current primary IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:15.633322   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find host DHCP lease matching {name: "ha-919901-m03", mac: "52:54:00:0f:9a:b2", ip: "192.168.39.195"} in network mk-ha-919901
	I0812 10:39:15.708534   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Getting to WaitForSSH function...
	I0812 10:39:15.708581   22139 main.go:141] libmachine: (ha-919901-m03) Reserved static IP address: 192.168.39.195
	I0812 10:39:15.708615   22139 main.go:141] libmachine: (ha-919901-m03) Waiting for SSH to be available...
	I0812 10:39:15.711497   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:15.711915   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901
	I0812 10:39:15.711943   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find defined IP address of network mk-ha-919901 interface with MAC address 52:54:00:0f:9a:b2
	I0812 10:39:15.712133   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Using SSH client type: external
	I0812 10:39:15.712161   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/id_rsa (-rw-------)
	I0812 10:39:15.712188   22139 main.go:141] libmachine: (ha-919901-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0812 10:39:15.712200   22139 main.go:141] libmachine: (ha-919901-m03) DBG | About to run SSH command:
	I0812 10:39:15.712218   22139 main.go:141] libmachine: (ha-919901-m03) DBG | exit 0
	I0812 10:39:15.716992   22139 main.go:141] libmachine: (ha-919901-m03) DBG | SSH cmd err, output: exit status 255: 
	I0812 10:39:15.717011   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0812 10:39:15.717020   22139 main.go:141] libmachine: (ha-919901-m03) DBG | command : exit 0
	I0812 10:39:15.717025   22139 main.go:141] libmachine: (ha-919901-m03) DBG | err     : exit status 255
	I0812 10:39:15.717032   22139 main.go:141] libmachine: (ha-919901-m03) DBG | output  : 
	I0812 10:39:18.719150   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Getting to WaitForSSH function...
	I0812 10:39:18.722036   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:18.722549   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:18.722571   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:18.722744   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Using SSH client type: external
	I0812 10:39:18.722808   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/id_rsa (-rw-------)
	I0812 10:39:18.722840   22139 main.go:141] libmachine: (ha-919901-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.195 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0812 10:39:18.722858   22139 main.go:141] libmachine: (ha-919901-m03) DBG | About to run SSH command:
	I0812 10:39:18.722886   22139 main.go:141] libmachine: (ha-919901-m03) DBG | exit 0
	I0812 10:39:18.853015   22139 main.go:141] libmachine: (ha-919901-m03) DBG | SSH cmd err, output: <nil>: 
	I0812 10:39:18.853304   22139 main.go:141] libmachine: (ha-919901-m03) KVM machine creation complete!
	I0812 10:39:18.853693   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetConfigRaw
	I0812 10:39:18.854248   22139 main.go:141] libmachine: (ha-919901-m03) Calling .DriverName
	I0812 10:39:18.854455   22139 main.go:141] libmachine: (ha-919901-m03) Calling .DriverName
	I0812 10:39:18.854659   22139 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0812 10:39:18.854676   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetState
	I0812 10:39:18.856425   22139 main.go:141] libmachine: Detecting operating system of created instance...
	I0812 10:39:18.856443   22139 main.go:141] libmachine: Waiting for SSH to be available...
	I0812 10:39:18.856456   22139 main.go:141] libmachine: Getting to WaitForSSH function...
	I0812 10:39:18.856464   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHHostname
	I0812 10:39:18.859008   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:18.859405   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:18.859434   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:18.859574   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHPort
	I0812 10:39:18.859732   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:39:18.859882   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:39:18.860046   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHUsername
	I0812 10:39:18.860210   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:39:18.860481   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0812 10:39:18.860502   22139 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0812 10:39:18.968298   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 10:39:18.968329   22139 main.go:141] libmachine: Detecting the provisioner...
	I0812 10:39:18.968337   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHHostname
	I0812 10:39:18.971304   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:18.971798   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:18.971829   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:18.971981   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHPort
	I0812 10:39:18.972220   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:39:18.972450   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:39:18.972629   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHUsername
	I0812 10:39:18.972874   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:39:18.973052   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0812 10:39:18.973063   22139 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0812 10:39:19.085740   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0812 10:39:19.085861   22139 main.go:141] libmachine: found compatible host: buildroot
	I0812 10:39:19.085877   22139 main.go:141] libmachine: Provisioning with buildroot...
	I0812 10:39:19.085888   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetMachineName
	I0812 10:39:19.086165   22139 buildroot.go:166] provisioning hostname "ha-919901-m03"
	I0812 10:39:19.086189   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetMachineName
	I0812 10:39:19.086402   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHHostname
	I0812 10:39:19.089552   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:19.089931   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:19.089960   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:19.090086   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHPort
	I0812 10:39:19.090280   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:39:19.090452   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:39:19.090612   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHUsername
	I0812 10:39:19.090783   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:39:19.090965   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0812 10:39:19.090978   22139 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-919901-m03 && echo "ha-919901-m03" | sudo tee /etc/hostname
	I0812 10:39:19.216661   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-919901-m03
	
	I0812 10:39:19.216698   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHHostname
	I0812 10:39:19.219545   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:19.219866   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:19.219896   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:19.220055   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHPort
	I0812 10:39:19.220222   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:39:19.220364   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:39:19.220509   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHUsername
	I0812 10:39:19.220667   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:39:19.220916   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0812 10:39:19.220938   22139 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-919901-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-919901-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-919901-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 10:39:19.337276   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 10:39:19.337308   22139 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19409-3774/.minikube CaCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19409-3774/.minikube}
	I0812 10:39:19.337327   22139 buildroot.go:174] setting up certificates
	I0812 10:39:19.337337   22139 provision.go:84] configureAuth start
	I0812 10:39:19.337352   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetMachineName
	I0812 10:39:19.337715   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetIP
	I0812 10:39:19.340775   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:19.341169   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:19.341198   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:19.341393   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHHostname
	I0812 10:39:19.343688   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:19.344068   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:19.344098   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:19.344201   22139 provision.go:143] copyHostCerts
	I0812 10:39:19.344230   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem
	I0812 10:39:19.344262   22139 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem, removing ...
	I0812 10:39:19.344271   22139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem
	I0812 10:39:19.344340   22139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem (1082 bytes)
	I0812 10:39:19.344440   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem
	I0812 10:39:19.344458   22139 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem, removing ...
	I0812 10:39:19.344462   22139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem
	I0812 10:39:19.344488   22139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem (1123 bytes)
	I0812 10:39:19.344531   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem
	I0812 10:39:19.344547   22139 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem, removing ...
	I0812 10:39:19.344553   22139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem
	I0812 10:39:19.344572   22139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem (1679 bytes)
	I0812 10:39:19.344619   22139 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem org=jenkins.ha-919901-m03 san=[127.0.0.1 192.168.39.195 ha-919901-m03 localhost minikube]
	I0812 10:39:19.600625   22139 provision.go:177] copyRemoteCerts
	I0812 10:39:19.600685   22139 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 10:39:19.600708   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHHostname
	I0812 10:39:19.603841   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:19.604190   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:19.604216   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:19.604411   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHPort
	I0812 10:39:19.604773   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:39:19.605047   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHUsername
	I0812 10:39:19.605222   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/id_rsa Username:docker}
	I0812 10:39:19.691643   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0812 10:39:19.691720   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0812 10:39:19.715320   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0812 10:39:19.715401   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0812 10:39:19.740178   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0812 10:39:19.740252   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0812 10:39:19.764374   22139 provision.go:87] duration metric: took 427.021932ms to configureAuth
	I0812 10:39:19.764400   22139 buildroot.go:189] setting minikube options for container-runtime
	I0812 10:39:19.764648   22139 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:39:19.764731   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHHostname
	I0812 10:39:19.767376   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:19.767877   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:19.767909   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:19.768130   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHPort
	I0812 10:39:19.768369   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:39:19.768531   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:39:19.768746   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHUsername
	I0812 10:39:19.768961   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:39:19.769167   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0812 10:39:19.769188   22139 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 10:39:20.033806   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 10:39:20.033838   22139 main.go:141] libmachine: Checking connection to Docker...
	I0812 10:39:20.033847   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetURL
	I0812 10:39:20.035217   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Using libvirt version 6000000
	I0812 10:39:20.037589   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:20.037945   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:20.037973   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:20.038159   22139 main.go:141] libmachine: Docker is up and running!
	I0812 10:39:20.038177   22139 main.go:141] libmachine: Reticulating splines...
	I0812 10:39:20.038184   22139 client.go:171] duration metric: took 26.277750614s to LocalClient.Create
	I0812 10:39:20.038211   22139 start.go:167] duration metric: took 26.277813055s to libmachine.API.Create "ha-919901"
	I0812 10:39:20.038220   22139 start.go:293] postStartSetup for "ha-919901-m03" (driver="kvm2")
	I0812 10:39:20.038230   22139 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 10:39:20.038245   22139 main.go:141] libmachine: (ha-919901-m03) Calling .DriverName
	I0812 10:39:20.038480   22139 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 10:39:20.038506   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHHostname
	I0812 10:39:20.040937   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:20.041236   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:20.041265   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:20.041434   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHPort
	I0812 10:39:20.041633   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:39:20.041805   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHUsername
	I0812 10:39:20.041959   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/id_rsa Username:docker}
	I0812 10:39:20.131924   22139 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 10:39:20.136138   22139 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 10:39:20.136162   22139 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/addons for local assets ...
	I0812 10:39:20.136226   22139 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/files for local assets ...
	I0812 10:39:20.136293   22139 filesync.go:149] local asset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> 109272.pem in /etc/ssl/certs
	I0812 10:39:20.136306   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> /etc/ssl/certs/109272.pem
	I0812 10:39:20.136393   22139 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 10:39:20.146030   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /etc/ssl/certs/109272.pem (1708 bytes)
	I0812 10:39:20.169471   22139 start.go:296] duration metric: took 131.237417ms for postStartSetup
	I0812 10:39:20.169531   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetConfigRaw
	I0812 10:39:20.170199   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetIP
	I0812 10:39:20.172820   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:20.173236   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:20.173263   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:20.173599   22139 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/config.json ...
	I0812 10:39:20.173821   22139 start.go:128] duration metric: took 26.432236244s to createHost
	I0812 10:39:20.173854   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHHostname
	I0812 10:39:20.175960   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:20.176365   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:20.176408   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:20.176544   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHPort
	I0812 10:39:20.176715   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:39:20.176874   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:39:20.177027   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHUsername
	I0812 10:39:20.177178   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:39:20.177332   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0812 10:39:20.177342   22139 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0812 10:39:20.293681   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723459160.273858948
	
	I0812 10:39:20.293710   22139 fix.go:216] guest clock: 1723459160.273858948
	I0812 10:39:20.293720   22139 fix.go:229] Guest: 2024-08-12 10:39:20.273858948 +0000 UTC Remote: 2024-08-12 10:39:20.173842555 +0000 UTC m=+163.958020195 (delta=100.016393ms)
	I0812 10:39:20.293742   22139 fix.go:200] guest clock delta is within tolerance: 100.016393ms
	I0812 10:39:20.293750   22139 start.go:83] releasing machines lock for "ha-919901-m03", held for 26.552323997s
	I0812 10:39:20.293775   22139 main.go:141] libmachine: (ha-919901-m03) Calling .DriverName
	I0812 10:39:20.294056   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetIP
	I0812 10:39:20.296860   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:20.297227   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:20.297264   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:20.299508   22139 out.go:177] * Found network options:
	I0812 10:39:20.300819   22139 out.go:177]   - NO_PROXY=192.168.39.5,192.168.39.139
	W0812 10:39:20.302196   22139 proxy.go:119] fail to check proxy env: Error ip not in block
	W0812 10:39:20.302219   22139 proxy.go:119] fail to check proxy env: Error ip not in block
	I0812 10:39:20.302233   22139 main.go:141] libmachine: (ha-919901-m03) Calling .DriverName
	I0812 10:39:20.302856   22139 main.go:141] libmachine: (ha-919901-m03) Calling .DriverName
	I0812 10:39:20.303071   22139 main.go:141] libmachine: (ha-919901-m03) Calling .DriverName
	I0812 10:39:20.303173   22139 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 10:39:20.303212   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHHostname
	W0812 10:39:20.303256   22139 proxy.go:119] fail to check proxy env: Error ip not in block
	W0812 10:39:20.303280   22139 proxy.go:119] fail to check proxy env: Error ip not in block
	I0812 10:39:20.303402   22139 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 10:39:20.303425   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHHostname
	I0812 10:39:20.306293   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:20.306503   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:20.306714   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:20.306742   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:20.306859   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHPort
	I0812 10:39:20.306968   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:20.306991   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:20.307040   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:39:20.307177   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHPort
	I0812 10:39:20.307196   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHUsername
	I0812 10:39:20.307329   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:39:20.307385   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/id_rsa Username:docker}
	I0812 10:39:20.307446   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHUsername
	I0812 10:39:20.307581   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/id_rsa Username:docker}
	I0812 10:39:20.548206   22139 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 10:39:20.555158   22139 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 10:39:20.555236   22139 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 10:39:20.571703   22139 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0812 10:39:20.571733   22139 start.go:495] detecting cgroup driver to use...
	I0812 10:39:20.571791   22139 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 10:39:20.589054   22139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 10:39:20.603071   22139 docker.go:217] disabling cri-docker service (if available) ...
	I0812 10:39:20.603140   22139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 10:39:20.616927   22139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 10:39:20.630567   22139 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 10:39:20.751978   22139 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 10:39:20.915733   22139 docker.go:233] disabling docker service ...
	I0812 10:39:20.915796   22139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 10:39:20.932763   22139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 10:39:20.946267   22139 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 10:39:21.059648   22139 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 10:39:21.173353   22139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 10:39:21.188021   22139 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 10:39:21.206027   22139 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0812 10:39:21.206094   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:39:21.216780   22139 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 10:39:21.216837   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:39:21.226789   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:39:21.236799   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:39:21.247259   22139 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 10:39:21.257537   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:39:21.269428   22139 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:39:21.285727   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:39:21.295562   22139 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 10:39:21.304501   22139 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0812 10:39:21.304551   22139 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0812 10:39:21.317231   22139 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 10:39:21.326612   22139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 10:39:21.454574   22139 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0812 10:39:21.610379   22139 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 10:39:21.610472   22139 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 10:39:21.615359   22139 start.go:563] Will wait 60s for crictl version
	I0812 10:39:21.615424   22139 ssh_runner.go:195] Run: which crictl
	I0812 10:39:21.619180   22139 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 10:39:21.661781   22139 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0812 10:39:21.661873   22139 ssh_runner.go:195] Run: crio --version
	I0812 10:39:21.692811   22139 ssh_runner.go:195] Run: crio --version
	I0812 10:39:21.724072   22139 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0812 10:39:21.725652   22139 out.go:177]   - env NO_PROXY=192.168.39.5
	I0812 10:39:21.727085   22139 out.go:177]   - env NO_PROXY=192.168.39.5,192.168.39.139
	I0812 10:39:21.728231   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetIP
	I0812 10:39:21.731239   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:21.731608   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:21.731632   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:21.731882   22139 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0812 10:39:21.736056   22139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 10:39:21.749288   22139 mustload.go:65] Loading cluster: ha-919901
	I0812 10:39:21.749598   22139 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:39:21.749928   22139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:39:21.749967   22139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:39:21.765319   22139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37113
	I0812 10:39:21.765738   22139 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:39:21.766171   22139 main.go:141] libmachine: Using API Version  1
	I0812 10:39:21.766192   22139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:39:21.766505   22139 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:39:21.766724   22139 main.go:141] libmachine: (ha-919901) Calling .GetState
	I0812 10:39:21.768368   22139 host.go:66] Checking if "ha-919901" exists ...
	I0812 10:39:21.768657   22139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:39:21.768689   22139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:39:21.783620   22139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42393
	I0812 10:39:21.784033   22139 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:39:21.784486   22139 main.go:141] libmachine: Using API Version  1
	I0812 10:39:21.784520   22139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:39:21.784825   22139 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:39:21.785024   22139 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:39:21.785254   22139 certs.go:68] Setting up /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901 for IP: 192.168.39.195
	I0812 10:39:21.785268   22139 certs.go:194] generating shared ca certs ...
	I0812 10:39:21.785282   22139 certs.go:226] acquiring lock for ca certs: {Name:mkab0b83a49d5695bc804bf960b468b7a47f73f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:39:21.785451   22139 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key
	I0812 10:39:21.785491   22139 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key
	I0812 10:39:21.785502   22139 certs.go:256] generating profile certs ...
	I0812 10:39:21.785585   22139 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/client.key
	I0812 10:39:21.785612   22139 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key.bc71961e
	I0812 10:39:21.785634   22139 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt.bc71961e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.5 192.168.39.139 192.168.39.195 192.168.39.254]
	I0812 10:39:21.949137   22139 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt.bc71961e ...
	I0812 10:39:21.949173   22139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt.bc71961e: {Name:mk5171e305f991d45c655793a063dad5dfd92062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:39:21.949359   22139 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key.bc71961e ...
	I0812 10:39:21.949377   22139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key.bc71961e: {Name:mk6d344a5c88c0ce65418b3d5eadf67a5c800f96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:39:21.949481   22139 certs.go:381] copying /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt.bc71961e -> /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt
	I0812 10:39:21.949636   22139 certs.go:385] copying /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key.bc71961e -> /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key
	I0812 10:39:21.949790   22139 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.key
	I0812 10:39:21.949808   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0812 10:39:21.949827   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0812 10:39:21.949847   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0812 10:39:21.949866   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0812 10:39:21.949885   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0812 10:39:21.949903   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0812 10:39:21.949921   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0812 10:39:21.949938   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0812 10:39:21.949997   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem (1338 bytes)
	W0812 10:39:21.950036   22139 certs.go:480] ignoring /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927_empty.pem, impossibly tiny 0 bytes
	I0812 10:39:21.950050   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem (1679 bytes)
	I0812 10:39:21.950083   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem (1082 bytes)
	I0812 10:39:21.950115   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem (1123 bytes)
	I0812 10:39:21.950146   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem (1679 bytes)
	I0812 10:39:21.950198   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem (1708 bytes)
	I0812 10:39:21.950234   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:39:21.950254   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem -> /usr/share/ca-certificates/10927.pem
	I0812 10:39:21.950272   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> /usr/share/ca-certificates/109272.pem
	I0812 10:39:21.950312   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:39:21.953769   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:39:21.954394   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:39:21.954416   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:39:21.954692   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:39:21.954903   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:39:21.955062   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:39:21.955240   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:39:22.029272   22139 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0812 10:39:22.035774   22139 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0812 10:39:22.047549   22139 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0812 10:39:22.051516   22139 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0812 10:39:22.062049   22139 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0812 10:39:22.066010   22139 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0812 10:39:22.076435   22139 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0812 10:39:22.080674   22139 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0812 10:39:22.093101   22139 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0812 10:39:22.097110   22139 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0812 10:39:22.107954   22139 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0812 10:39:22.111581   22139 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0812 10:39:22.122165   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 10:39:22.145850   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 10:39:22.167788   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 10:39:22.191295   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 10:39:22.217242   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0812 10:39:22.240482   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0812 10:39:22.264083   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 10:39:22.287415   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0812 10:39:22.311289   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 10:39:22.334555   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem --> /usr/share/ca-certificates/10927.pem (1338 bytes)
	I0812 10:39:22.356979   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /usr/share/ca-certificates/109272.pem (1708 bytes)
	I0812 10:39:22.379881   22139 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0812 10:39:22.396722   22139 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0812 10:39:22.414597   22139 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0812 10:39:22.431326   22139 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0812 10:39:22.449267   22139 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0812 10:39:22.465456   22139 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0812 10:39:22.481885   22139 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0812 10:39:22.497980   22139 ssh_runner.go:195] Run: openssl version
	I0812 10:39:22.503469   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109272.pem && ln -fs /usr/share/ca-certificates/109272.pem /etc/ssl/certs/109272.pem"
	I0812 10:39:22.514150   22139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109272.pem
	I0812 10:39:22.518570   22139 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 10:33 /usr/share/ca-certificates/109272.pem
	I0812 10:39:22.518619   22139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109272.pem
	I0812 10:39:22.524075   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109272.pem /etc/ssl/certs/3ec20f2e.0"
	I0812 10:39:22.534675   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 10:39:22.545520   22139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:39:22.549823   22139 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:39:22.549879   22139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:39:22.555414   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 10:39:22.566319   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10927.pem && ln -fs /usr/share/ca-certificates/10927.pem /etc/ssl/certs/10927.pem"
	I0812 10:39:22.576970   22139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10927.pem
	I0812 10:39:22.581430   22139 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 10:33 /usr/share/ca-certificates/10927.pem
	I0812 10:39:22.581501   22139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10927.pem
	I0812 10:39:22.587536   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10927.pem /etc/ssl/certs/51391683.0"
	I0812 10:39:22.598543   22139 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 10:39:22.602642   22139 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0812 10:39:22.602709   22139 kubeadm.go:934] updating node {m03 192.168.39.195 8443 v1.30.3 crio true true} ...
	I0812 10:39:22.602788   22139 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-919901-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-919901 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0812 10:39:22.602814   22139 kube-vip.go:115] generating kube-vip config ...
	I0812 10:39:22.602851   22139 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0812 10:39:22.619658   22139 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0812 10:39:22.619739   22139 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0812 10:39:22.619808   22139 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0812 10:39:22.629510   22139 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0812 10:39:22.629588   22139 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0812 10:39:22.638674   22139 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0812 10:39:22.638706   22139 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0812 10:39:22.638723   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0812 10:39:22.638728   22139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:39:22.638674   22139 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0812 10:39:22.638784   22139 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0812 10:39:22.638787   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0812 10:39:22.638864   22139 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0812 10:39:22.656137   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0812 10:39:22.656203   22139 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0812 10:39:22.656245   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0812 10:39:22.656266   22139 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0812 10:39:22.656297   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0812 10:39:22.656247   22139 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0812 10:39:22.681531   22139 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0812 10:39:22.681580   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0812 10:39:23.554251   22139 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0812 10:39:23.564465   22139 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0812 10:39:23.583010   22139 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 10:39:23.600680   22139 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0812 10:39:23.618445   22139 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0812 10:39:23.622366   22139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 10:39:23.634628   22139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 10:39:23.753923   22139 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 10:39:23.770529   22139 host.go:66] Checking if "ha-919901" exists ...
	I0812 10:39:23.770918   22139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:39:23.770966   22139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:39:23.789842   22139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45763
	I0812 10:39:23.790324   22139 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:39:23.790831   22139 main.go:141] libmachine: Using API Version  1
	I0812 10:39:23.790854   22139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:39:23.791214   22139 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:39:23.791426   22139 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:39:23.791569   22139 start.go:317] joinCluster: &{Name:ha-919901 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-919901 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 10:39:23.791689   22139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0812 10:39:23.791707   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:39:23.794805   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:39:23.795259   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:39:23.795296   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:39:23.795403   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:39:23.795640   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:39:23.795826   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:39:23.795980   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:39:23.966445   22139 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 10:39:23.966482   22139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token f9003j.6i2ogw8a6w17yk3t --discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-919901-m03 --control-plane --apiserver-advertise-address=192.168.39.195 --apiserver-bind-port=8443"
	I0812 10:39:47.821276   22139 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token f9003j.6i2ogw8a6w17yk3t --discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-919901-m03 --control-plane --apiserver-advertise-address=192.168.39.195 --apiserver-bind-port=8443": (23.85475962s)
	I0812 10:39:47.821324   22139 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0812 10:39:48.432646   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-919901-m03 minikube.k8s.io/updated_at=2024_08_12T10_39_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7 minikube.k8s.io/name=ha-919901 minikube.k8s.io/primary=false
	I0812 10:39:48.559096   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-919901-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0812 10:39:48.681854   22139 start.go:319] duration metric: took 24.890280586s to joinCluster
	I0812 10:39:48.681992   22139 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 10:39:48.682338   22139 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:39:48.683772   22139 out.go:177] * Verifying Kubernetes components...
	I0812 10:39:48.685350   22139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 10:39:48.974620   22139 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 10:39:49.044155   22139 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 10:39:49.044439   22139 kapi.go:59] client config for ha-919901: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/client.crt", KeyFile:"/home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/client.key", CAFile:"/home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d03100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0812 10:39:49.044496   22139 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.5:8443
	I0812 10:39:49.044728   22139 node_ready.go:35] waiting up to 6m0s for node "ha-919901-m03" to be "Ready" ...
	I0812 10:39:49.044811   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:49.044822   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:49.044832   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:49.044838   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:49.048172   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:49.545024   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:49.545045   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:49.545054   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:49.545061   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:49.553804   22139 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0812 10:39:50.045979   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:50.046020   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:50.046033   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:50.046044   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:50.050363   22139 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 10:39:50.545032   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:50.545051   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:50.545060   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:50.545064   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:50.554965   22139 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0812 10:39:51.045860   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:51.045881   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:51.045890   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:51.045896   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:51.049642   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:51.050320   22139 node_ready.go:53] node "ha-919901-m03" has status "Ready":"False"
	I0812 10:39:51.545456   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:51.545482   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:51.545493   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:51.545499   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:51.549297   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:52.044953   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:52.044981   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:52.045006   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:52.045014   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:52.048263   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:52.545777   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:52.545795   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:52.545803   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:52.545808   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:52.549410   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:53.045058   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:53.045081   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:53.045089   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:53.045092   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:53.048507   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:53.545314   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:53.545353   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:53.545362   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:53.545367   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:53.549047   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:53.549963   22139 node_ready.go:53] node "ha-919901-m03" has status "Ready":"False"
	I0812 10:39:54.045209   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:54.045233   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:54.045243   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:54.045248   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:54.048625   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:54.545642   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:54.545677   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:54.545689   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:54.545696   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:54.549691   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:55.045500   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:55.045521   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:55.045529   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:55.045533   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:55.049104   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:55.545128   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:55.545158   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:55.545167   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:55.545174   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:55.631274   22139 round_trippers.go:574] Response Status: 200 OK in 86 milliseconds
	I0812 10:39:55.632236   22139 node_ready.go:53] node "ha-919901-m03" has status "Ready":"False"
	I0812 10:39:56.045539   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:56.045566   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:56.045578   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:56.045585   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:56.048857   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:56.545777   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:56.545802   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:56.545814   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:56.545820   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:56.549521   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:57.045521   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:57.045544   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:57.045552   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:57.045556   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:57.049336   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:57.545823   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:57.545848   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:57.545860   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:57.545866   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:57.549847   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:58.045617   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:58.045641   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:58.045649   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:58.045654   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:58.049059   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:58.049903   22139 node_ready.go:53] node "ha-919901-m03" has status "Ready":"False"
	I0812 10:39:58.545128   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:58.545150   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:58.545161   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:58.545167   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:58.548940   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:59.045945   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:59.045976   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:59.045984   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:59.045991   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:59.049272   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:59.545049   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:59.545074   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:59.545081   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:59.545085   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:59.548633   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:00.045573   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:00.045597   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:00.045608   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:00.045614   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:00.048944   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:00.544947   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:00.544972   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:00.544988   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:00.544995   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:00.548418   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:00.549075   22139 node_ready.go:53] node "ha-919901-m03" has status "Ready":"False"
	I0812 10:40:01.045466   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:01.045509   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:01.045520   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:01.045527   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:01.049225   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:01.545827   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:01.545850   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:01.545861   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:01.545866   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:01.549774   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:02.045839   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:02.045862   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:02.045870   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:02.045873   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:02.049216   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:02.545047   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:02.545081   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:02.545089   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:02.545093   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:02.548701   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:02.549430   22139 node_ready.go:53] node "ha-919901-m03" has status "Ready":"False"
	I0812 10:40:03.045819   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:03.045842   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:03.045848   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:03.045853   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:03.049420   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:03.545321   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:03.545343   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:03.545353   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:03.545358   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:03.548983   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:04.045753   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:04.045775   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:04.045783   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:04.045786   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:04.048983   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:04.545909   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:04.545938   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:04.545948   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:04.545952   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:04.549225   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:04.549750   22139 node_ready.go:53] node "ha-919901-m03" has status "Ready":"False"
	I0812 10:40:05.045124   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:05.045146   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:05.045153   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:05.045157   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:05.048385   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:05.545052   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:05.545077   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:05.545088   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:05.545095   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:05.549732   22139 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 10:40:06.045845   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:06.045867   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:06.045878   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:06.045883   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:06.049809   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:06.545235   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:06.545279   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:06.545289   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:06.545293   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:06.548818   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:07.045650   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:07.045684   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:07.045694   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:07.045704   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:07.049409   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:07.050589   22139 node_ready.go:53] node "ha-919901-m03" has status "Ready":"False"
	I0812 10:40:07.545015   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:07.545051   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:07.545059   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:07.545063   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:07.548434   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:07.549092   22139 node_ready.go:49] node "ha-919901-m03" has status "Ready":"True"
	I0812 10:40:07.549116   22139 node_ready.go:38] duration metric: took 18.504372406s for node "ha-919901-m03" to be "Ready" ...
	I0812 10:40:07.549129   22139 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 10:40:07.549191   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0812 10:40:07.549200   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:07.549207   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:07.549211   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:07.556054   22139 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0812 10:40:07.562760   22139 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rc7cl" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:07.562865   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rc7cl
	I0812 10:40:07.562874   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:07.562882   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:07.562886   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:07.566516   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:07.567337   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:40:07.567352   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:07.567359   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:07.567364   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:07.570320   22139 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 10:40:07.570849   22139 pod_ready.go:92] pod "coredns-7db6d8ff4d-rc7cl" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:07.570868   22139 pod_ready.go:81] duration metric: took 8.078681ms for pod "coredns-7db6d8ff4d-rc7cl" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:07.570880   22139 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wstd4" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:07.570940   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-wstd4
	I0812 10:40:07.570950   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:07.570959   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:07.570967   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:07.573966   22139 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 10:40:07.574787   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:40:07.574803   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:07.574810   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:07.574814   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:07.577707   22139 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 10:40:07.578355   22139 pod_ready.go:92] pod "coredns-7db6d8ff4d-wstd4" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:07.578375   22139 pod_ready.go:81] duration metric: took 7.487916ms for pod "coredns-7db6d8ff4d-wstd4" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:07.578386   22139 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-919901" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:07.578458   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-919901
	I0812 10:40:07.578469   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:07.578476   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:07.578480   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:07.581268   22139 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 10:40:07.581792   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:40:07.581806   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:07.581812   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:07.581816   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:07.584654   22139 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 10:40:07.585253   22139 pod_ready.go:92] pod "etcd-ha-919901" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:07.585273   22139 pod_ready.go:81] duration metric: took 6.878189ms for pod "etcd-ha-919901" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:07.585287   22139 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-919901-m02" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:07.585354   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-919901-m02
	I0812 10:40:07.585363   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:07.585373   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:07.585381   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:07.588128   22139 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 10:40:07.588717   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:40:07.588731   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:07.588738   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:07.588741   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:07.591951   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:07.592782   22139 pod_ready.go:92] pod "etcd-ha-919901-m02" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:07.592805   22139 pod_ready.go:81] duration metric: took 7.50856ms for pod "etcd-ha-919901-m02" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:07.592818   22139 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-919901-m03" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:07.745151   22139 request.go:629] Waited for 152.258306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-919901-m03
	I0812 10:40:07.745239   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-919901-m03
	I0812 10:40:07.745250   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:07.745257   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:07.745266   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:07.748628   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:07.945521   22139 request.go:629] Waited for 196.390149ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:07.945612   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:07.945635   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:07.945647   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:07.945662   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:07.949009   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:07.949668   22139 pod_ready.go:92] pod "etcd-ha-919901-m03" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:07.949688   22139 pod_ready.go:81] duration metric: took 356.862793ms for pod "etcd-ha-919901-m03" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:07.949709   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-919901" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:08.145413   22139 request.go:629] Waited for 195.623441ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-919901
	I0812 10:40:08.145470   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-919901
	I0812 10:40:08.145475   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:08.145482   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:08.145487   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:08.148840   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:08.346104   22139 request.go:629] Waited for 196.419769ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:40:08.346157   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:40:08.346162   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:08.346169   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:08.346172   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:08.349269   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:08.349915   22139 pod_ready.go:92] pod "kube-apiserver-ha-919901" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:08.349934   22139 pod_ready.go:81] duration metric: took 400.217619ms for pod "kube-apiserver-ha-919901" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:08.349962   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-919901-m02" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:08.545517   22139 request.go:629] Waited for 195.481494ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-919901-m02
	I0812 10:40:08.545601   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-919901-m02
	I0812 10:40:08.545607   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:08.545615   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:08.545622   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:08.549619   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:08.745193   22139 request.go:629] Waited for 194.311263ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:40:08.745273   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:40:08.745281   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:08.745315   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:08.745321   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:08.748900   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:08.749608   22139 pod_ready.go:92] pod "kube-apiserver-ha-919901-m02" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:08.749629   22139 pod_ready.go:81] duration metric: took 399.659166ms for pod "kube-apiserver-ha-919901-m02" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:08.749639   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-919901-m03" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:08.945644   22139 request.go:629] Waited for 195.924629ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-919901-m03
	I0812 10:40:08.945702   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-919901-m03
	I0812 10:40:08.945708   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:08.945717   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:08.945722   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:08.949521   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:09.145627   22139 request.go:629] Waited for 195.367609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:09.145703   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:09.145710   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:09.145721   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:09.145727   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:09.149187   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:09.149675   22139 pod_ready.go:92] pod "kube-apiserver-ha-919901-m03" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:09.149692   22139 pod_ready.go:81] duration metric: took 400.047769ms for pod "kube-apiserver-ha-919901-m03" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:09.149701   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-919901" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:09.345854   22139 request.go:629] Waited for 196.064636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-919901
	I0812 10:40:09.345913   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-919901
	I0812 10:40:09.345918   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:09.345925   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:09.345930   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:09.349312   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:09.545325   22139 request.go:629] Waited for 195.308979ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:40:09.545400   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:40:09.545407   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:09.545418   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:09.545423   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:09.548980   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:09.549779   22139 pod_ready.go:92] pod "kube-controller-manager-ha-919901" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:09.549798   22139 pod_ready.go:81] duration metric: took 400.090053ms for pod "kube-controller-manager-ha-919901" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:09.549808   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-919901-m02" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:09.746018   22139 request.go:629] Waited for 196.147849ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-919901-m02
	I0812 10:40:09.746105   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-919901-m02
	I0812 10:40:09.746115   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:09.746125   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:09.746137   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:09.749873   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:09.946023   22139 request.go:629] Waited for 195.321492ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:40:09.946092   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:40:09.946099   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:09.946109   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:09.946115   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:09.949468   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:09.950018   22139 pod_ready.go:92] pod "kube-controller-manager-ha-919901-m02" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:09.950040   22139 pod_ready.go:81] duration metric: took 400.223629ms for pod "kube-controller-manager-ha-919901-m02" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:09.950051   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-919901-m03" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:10.146046   22139 request.go:629] Waited for 195.931355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-919901-m03
	I0812 10:40:10.146109   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-919901-m03
	I0812 10:40:10.146114   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:10.146122   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:10.146127   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:10.149521   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:10.345712   22139 request.go:629] Waited for 195.387623ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:10.345789   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:10.345795   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:10.345803   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:10.345811   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:10.349722   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:10.350685   22139 pod_ready.go:92] pod "kube-controller-manager-ha-919901-m03" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:10.350710   22139 pod_ready.go:81] duration metric: took 400.651599ms for pod "kube-controller-manager-ha-919901-m03" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:10.350725   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6xqjr" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:10.545742   22139 request.go:629] Waited for 194.940464ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6xqjr
	I0812 10:40:10.545805   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6xqjr
	I0812 10:40:10.545811   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:10.545818   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:10.545822   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:10.549599   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:10.745644   22139 request.go:629] Waited for 195.345272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:10.745715   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:10.745720   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:10.745727   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:10.745730   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:10.749381   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:10.749899   22139 pod_ready.go:92] pod "kube-proxy-6xqjr" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:10.749916   22139 pod_ready.go:81] duration metric: took 399.184059ms for pod "kube-proxy-6xqjr" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:10.749926   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cczfj" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:10.946044   22139 request.go:629] Waited for 196.056707ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cczfj
	I0812 10:40:10.946111   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cczfj
	I0812 10:40:10.946117   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:10.946129   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:10.946137   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:10.949676   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:11.145879   22139 request.go:629] Waited for 195.384898ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:40:11.145967   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:40:11.145978   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:11.145985   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:11.145988   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:11.149064   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:11.149663   22139 pod_ready.go:92] pod "kube-proxy-cczfj" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:11.149680   22139 pod_ready.go:81] duration metric: took 399.748449ms for pod "kube-proxy-cczfj" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:11.149689   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ftvfl" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:11.345050   22139 request.go:629] Waited for 195.276304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ftvfl
	I0812 10:40:11.345120   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ftvfl
	I0812 10:40:11.345126   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:11.345134   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:11.345141   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:11.348419   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:11.545437   22139 request.go:629] Waited for 196.290149ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:40:11.545494   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:40:11.545498   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:11.545506   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:11.545510   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:11.548860   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:11.549308   22139 pod_ready.go:92] pod "kube-proxy-ftvfl" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:11.549326   22139 pod_ready.go:81] duration metric: took 399.631439ms for pod "kube-proxy-ftvfl" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:11.549335   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-919901" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:11.745434   22139 request.go:629] Waited for 196.031432ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-919901
	I0812 10:40:11.745507   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-919901
	I0812 10:40:11.745512   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:11.745519   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:11.745533   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:11.749044   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:11.945915   22139 request.go:629] Waited for 196.056401ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:40:11.946015   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:40:11.946028   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:11.946039   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:11.946047   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:11.949046   22139 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 10:40:11.949770   22139 pod_ready.go:92] pod "kube-scheduler-ha-919901" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:11.949786   22139 pod_ready.go:81] duration metric: took 400.445415ms for pod "kube-scheduler-ha-919901" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:11.949795   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-919901-m02" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:12.145772   22139 request.go:629] Waited for 195.913279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-919901-m02
	I0812 10:40:12.145883   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-919901-m02
	I0812 10:40:12.145893   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:12.145902   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:12.145913   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:12.149669   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:12.345718   22139 request.go:629] Waited for 195.386055ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:40:12.345836   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:40:12.345858   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:12.345870   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:12.345879   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:12.349428   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:12.349955   22139 pod_ready.go:92] pod "kube-scheduler-ha-919901-m02" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:12.349973   22139 pod_ready.go:81] duration metric: took 400.172097ms for pod "kube-scheduler-ha-919901-m02" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:12.349983   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-919901-m03" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:12.545083   22139 request.go:629] Waited for 195.036653ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-919901-m03
	I0812 10:40:12.545173   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-919901-m03
	I0812 10:40:12.545185   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:12.545196   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:12.545201   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:12.548690   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:12.745765   22139 request.go:629] Waited for 196.391035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:12.745846   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:12.745857   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:12.745864   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:12.745868   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:12.749373   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:12.750288   22139 pod_ready.go:92] pod "kube-scheduler-ha-919901-m03" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:12.750312   22139 pod_ready.go:81] duration metric: took 400.323333ms for pod "kube-scheduler-ha-919901-m03" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:12.750323   22139 pod_ready.go:38] duration metric: took 5.201181989s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 10:40:12.750354   22139 api_server.go:52] waiting for apiserver process to appear ...
	I0812 10:40:12.750463   22139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:40:12.767642   22139 api_server.go:72] duration metric: took 24.085611745s to wait for apiserver process to appear ...
	I0812 10:40:12.767674   22139 api_server.go:88] waiting for apiserver healthz status ...
	I0812 10:40:12.767702   22139 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I0812 10:40:12.774553   22139 api_server.go:279] https://192.168.39.5:8443/healthz returned 200:
	ok
	I0812 10:40:12.774683   22139 round_trippers.go:463] GET https://192.168.39.5:8443/version
	I0812 10:40:12.774697   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:12.774706   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:12.774714   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:12.775702   22139 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0812 10:40:12.775772   22139 api_server.go:141] control plane version: v1.30.3
	I0812 10:40:12.775789   22139 api_server.go:131] duration metric: took 8.106849ms to wait for apiserver health ...
	I0812 10:40:12.775802   22139 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 10:40:12.946064   22139 request.go:629] Waited for 170.185941ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0812 10:40:12.946156   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0812 10:40:12.946163   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:12.946173   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:12.946180   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:12.952972   22139 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0812 10:40:12.959365   22139 system_pods.go:59] 24 kube-system pods found
	I0812 10:40:12.959414   22139 system_pods.go:61] "coredns-7db6d8ff4d-rc7cl" [92f21234-d4e8-4f0e-a8e5-356db2297843] Running
	I0812 10:40:12.959422   22139 system_pods.go:61] "coredns-7db6d8ff4d-wstd4" [53bfc998-8d70-4dc5-b0f9-a78610183a2b] Running
	I0812 10:40:12.959427   22139 system_pods.go:61] "etcd-ha-919901" [a2c1d3fe-ff0a-4239-86b1-fa95100bf490] Running
	I0812 10:40:12.959432   22139 system_pods.go:61] "etcd-ha-919901-m02" [37a916a1-fb7f-4256-9ce9-e77d68b91eec] Running
	I0812 10:40:12.959437   22139 system_pods.go:61] "etcd-ha-919901-m03" [499957e0-c2b4-4a3c-9e52-933153a1c27e] Running
	I0812 10:40:12.959443   22139 system_pods.go:61] "kindnet-6v7rs" [43c3bf93-f498-4ea3-b494-a1f06e64e2d2] Running
	I0812 10:40:12.959447   22139 system_pods.go:61] "kindnet-8cqm5" [ac0a56b9-e7f9-439d-a088-54463e9d41bc] Running
	I0812 10:40:12.959453   22139 system_pods.go:61] "kindnet-k5wz9" [75e585a5-9ab7-4211-8ed0-dc1d21345883] Running
	I0812 10:40:12.959458   22139 system_pods.go:61] "kube-apiserver-ha-919901" [193c8d04-dc77-4004-8000-fd396b727895] Running
	I0812 10:40:12.959463   22139 system_pods.go:61] "kube-apiserver-ha-919901-m02" [58d119c5-c69e-4a89-bab6-18a82f0cfe3f] Running
	I0812 10:40:12.959476   22139 system_pods.go:61] "kube-apiserver-ha-919901-m03" [1c13201f-27e2-4987-bfc9-1c25f8e447bd] Running
	I0812 10:40:12.959481   22139 system_pods.go:61] "kube-controller-manager-ha-919901" [242663e4-854c-4b58-9864-cabeb79577f7] Running
	I0812 10:40:12.959490   22139 system_pods.go:61] "kube-controller-manager-ha-919901-m02" [8036adae-dadc-4dbe-af53-de82cc21d9c2] Running
	I0812 10:40:12.959496   22139 system_pods.go:61] "kube-controller-manager-ha-919901-m03" [ef3b4e77-df48-48c0-a4b2-e9a1f1e64f70] Running
	I0812 10:40:12.959505   22139 system_pods.go:61] "kube-proxy-6xqjr" [013061ce-22f2-4c9c-991e-9a911c914ca4] Running
	I0812 10:40:12.959515   22139 system_pods.go:61] "kube-proxy-cczfj" [711059fc-2c4a-4706-97a5-007be66ecaff] Running
	I0812 10:40:12.959520   22139 system_pods.go:61] "kube-proxy-ftvfl" [7ed243a1-62f6-4ad1-8873-0fbe1756be9e] Running
	I0812 10:40:12.959528   22139 system_pods.go:61] "kube-scheduler-ha-919901" [ec67c1cf-8e1c-4973-8f96-c558fccb26be] Running
	I0812 10:40:12.959533   22139 system_pods.go:61] "kube-scheduler-ha-919901-m02" [8cf797a6-cf19-4653-a998-395260a0ee1a] Running
	I0812 10:40:12.959540   22139 system_pods.go:61] "kube-scheduler-ha-919901-m03" [712b2426-78f2-4560-a7a8-7af53da3c627] Running
	I0812 10:40:12.959546   22139 system_pods.go:61] "kube-vip-ha-919901" [46735446-a563-4870-9509-441ad0cd5c45] Running
	I0812 10:40:12.959554   22139 system_pods.go:61] "kube-vip-ha-919901-m02" [9df99381-0503-4bef-ac63-a06f687d1c1a] Running
	I0812 10:40:12.959561   22139 system_pods.go:61] "kube-vip-ha-919901-m03" [2e37e0c0-dbac-43f1-b7c8-141d6db6c191] Running
	I0812 10:40:12.959566   22139 system_pods.go:61] "storage-provisioner" [6d697e68-33fa-4784-90d8-0561d3fff6a8] Running
	I0812 10:40:12.959575   22139 system_pods.go:74] duration metric: took 183.766982ms to wait for pod list to return data ...
	I0812 10:40:12.959588   22139 default_sa.go:34] waiting for default service account to be created ...
	I0812 10:40:13.145977   22139 request.go:629] Waited for 186.296523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/default/serviceaccounts
	I0812 10:40:13.146050   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/default/serviceaccounts
	I0812 10:40:13.146060   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:13.146073   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:13.146083   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:13.149736   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:13.149861   22139 default_sa.go:45] found service account: "default"
	I0812 10:40:13.149880   22139 default_sa.go:55] duration metric: took 190.283977ms for default service account to be created ...
	I0812 10:40:13.149890   22139 system_pods.go:116] waiting for k8s-apps to be running ...
	I0812 10:40:13.345342   22139 request.go:629] Waited for 195.382281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0812 10:40:13.345400   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0812 10:40:13.345406   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:13.345413   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:13.345418   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:13.352358   22139 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0812 10:40:13.358696   22139 system_pods.go:86] 24 kube-system pods found
	I0812 10:40:13.358727   22139 system_pods.go:89] "coredns-7db6d8ff4d-rc7cl" [92f21234-d4e8-4f0e-a8e5-356db2297843] Running
	I0812 10:40:13.358732   22139 system_pods.go:89] "coredns-7db6d8ff4d-wstd4" [53bfc998-8d70-4dc5-b0f9-a78610183a2b] Running
	I0812 10:40:13.358737   22139 system_pods.go:89] "etcd-ha-919901" [a2c1d3fe-ff0a-4239-86b1-fa95100bf490] Running
	I0812 10:40:13.358740   22139 system_pods.go:89] "etcd-ha-919901-m02" [37a916a1-fb7f-4256-9ce9-e77d68b91eec] Running
	I0812 10:40:13.358745   22139 system_pods.go:89] "etcd-ha-919901-m03" [499957e0-c2b4-4a3c-9e52-933153a1c27e] Running
	I0812 10:40:13.358749   22139 system_pods.go:89] "kindnet-6v7rs" [43c3bf93-f498-4ea3-b494-a1f06e64e2d2] Running
	I0812 10:40:13.358753   22139 system_pods.go:89] "kindnet-8cqm5" [ac0a56b9-e7f9-439d-a088-54463e9d41bc] Running
	I0812 10:40:13.358756   22139 system_pods.go:89] "kindnet-k5wz9" [75e585a5-9ab7-4211-8ed0-dc1d21345883] Running
	I0812 10:40:13.358762   22139 system_pods.go:89] "kube-apiserver-ha-919901" [193c8d04-dc77-4004-8000-fd396b727895] Running
	I0812 10:40:13.358766   22139 system_pods.go:89] "kube-apiserver-ha-919901-m02" [58d119c5-c69e-4a89-bab6-18a82f0cfe3f] Running
	I0812 10:40:13.358770   22139 system_pods.go:89] "kube-apiserver-ha-919901-m03" [1c13201f-27e2-4987-bfc9-1c25f8e447bd] Running
	I0812 10:40:13.358774   22139 system_pods.go:89] "kube-controller-manager-ha-919901" [242663e4-854c-4b58-9864-cabeb79577f7] Running
	I0812 10:40:13.358778   22139 system_pods.go:89] "kube-controller-manager-ha-919901-m02" [8036adae-dadc-4dbe-af53-de82cc21d9c2] Running
	I0812 10:40:13.358784   22139 system_pods.go:89] "kube-controller-manager-ha-919901-m03" [ef3b4e77-df48-48c0-a4b2-e9a1f1e64f70] Running
	I0812 10:40:13.358789   22139 system_pods.go:89] "kube-proxy-6xqjr" [013061ce-22f2-4c9c-991e-9a911c914ca4] Running
	I0812 10:40:13.358793   22139 system_pods.go:89] "kube-proxy-cczfj" [711059fc-2c4a-4706-97a5-007be66ecaff] Running
	I0812 10:40:13.358797   22139 system_pods.go:89] "kube-proxy-ftvfl" [7ed243a1-62f6-4ad1-8873-0fbe1756be9e] Running
	I0812 10:40:13.358801   22139 system_pods.go:89] "kube-scheduler-ha-919901" [ec67c1cf-8e1c-4973-8f96-c558fccb26be] Running
	I0812 10:40:13.358804   22139 system_pods.go:89] "kube-scheduler-ha-919901-m02" [8cf797a6-cf19-4653-a998-395260a0ee1a] Running
	I0812 10:40:13.358808   22139 system_pods.go:89] "kube-scheduler-ha-919901-m03" [712b2426-78f2-4560-a7a8-7af53da3c627] Running
	I0812 10:40:13.358812   22139 system_pods.go:89] "kube-vip-ha-919901" [46735446-a563-4870-9509-441ad0cd5c45] Running
	I0812 10:40:13.358815   22139 system_pods.go:89] "kube-vip-ha-919901-m02" [9df99381-0503-4bef-ac63-a06f687d1c1a] Running
	I0812 10:40:13.358818   22139 system_pods.go:89] "kube-vip-ha-919901-m03" [2e37e0c0-dbac-43f1-b7c8-141d6db6c191] Running
	I0812 10:40:13.358822   22139 system_pods.go:89] "storage-provisioner" [6d697e68-33fa-4784-90d8-0561d3fff6a8] Running
	I0812 10:40:13.358827   22139 system_pods.go:126] duration metric: took 208.929081ms to wait for k8s-apps to be running ...
	I0812 10:40:13.358836   22139 system_svc.go:44] waiting for kubelet service to be running ....
	I0812 10:40:13.358884   22139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:40:13.374275   22139 system_svc.go:56] duration metric: took 15.428513ms WaitForService to wait for kubelet
	I0812 10:40:13.374314   22139 kubeadm.go:582] duration metric: took 24.692286487s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 10:40:13.374354   22139 node_conditions.go:102] verifying NodePressure condition ...
	I0812 10:40:13.545990   22139 request.go:629] Waited for 171.54847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes
	I0812 10:40:13.546055   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes
	I0812 10:40:13.546062   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:13.546073   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:13.546081   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:13.550219   22139 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 10:40:13.551372   22139 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 10:40:13.551412   22139 node_conditions.go:123] node cpu capacity is 2
	I0812 10:40:13.551437   22139 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 10:40:13.551443   22139 node_conditions.go:123] node cpu capacity is 2
	I0812 10:40:13.551449   22139 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 10:40:13.551454   22139 node_conditions.go:123] node cpu capacity is 2
	I0812 10:40:13.551463   22139 node_conditions.go:105] duration metric: took 177.102596ms to run NodePressure ...
	I0812 10:40:13.551483   22139 start.go:241] waiting for startup goroutines ...
	I0812 10:40:13.551512   22139 start.go:255] writing updated cluster config ...
	I0812 10:40:13.551918   22139 ssh_runner.go:195] Run: rm -f paused
	I0812 10:40:13.605291   22139 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0812 10:40:13.607605   22139 out.go:177] * Done! kubectl is now configured to use "ha-919901" cluster and "default" namespace by default
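
	The pod_ready.go / api_server.go entries above are minikube's own readiness loop: for each system pod it GETs the pod, checks the Ready condition, then GETs the node it runs on, and finally probes the apiserver /healthz endpoint before declaring the cluster up. As a point of reference only, below is a minimal client-go sketch of those same two checks; it is not minikube source, and the kubeconfig path and pod name are illustrative assumptions.

	// readiness_sketch.go - minimal sketch of the /healthz probe and the
	// per-pod Ready wait seen in the log above (assumptions: kubeconfig in
	// the default location, pod name copied from this log for illustration).
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls until the pod's Ready condition is True, roughly what
	// pod_ready.go does for each system-critical pod in the log.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
		return wait.PollUntilContextTimeout(ctx, 400*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, err
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		ctx := context.Background()

		// Assumed kubeconfig location; the test harness points at its own profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Equivalent of "Checking apiserver healthz at https://<ip>:8443/healthz ... returned 200: ok".
		body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(ctx).Raw()
		if err != nil {
			panic(err)
		}
		fmt.Printf("healthz: %s\n", body)

		// Pod name taken from the log; any kube-system pod is checked the same way.
		if err := waitPodReady(ctx, cs, "kube-system", "kube-scheduler-ha-919901"); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}

	The ~400ms spent per pod in the log is consistent with two throttled GETs (pod, then node) at roughly 195ms each, which is why the sketch polls on a similar interval.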
	
	
	==> CRI-O <==
	Aug 12 10:43:48 ha-919901 crio[680]: time="2024-08-12 10:43:48.657569969Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f57eacd1-e8c2-4f8e-9367-656e731c2b95 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:43:48 ha-919901 crio[680]: time="2024-08-12 10:43:48.657646298Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f57eacd1-e8c2-4f8e-9367-656e731c2b95 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:43:48 ha-919901 crio[680]: time="2024-08-12 10:43:48.658166501Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8542d2fe34f2b44c191e084e1f85f0eb7b1a0b1a39d63b3fd3ba0510027c5668,PodSandboxId:40dfaa461230a2e373c966c081e58e06b69e54e472764d0338e67823d0a759f7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723459217675933508,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pj8gg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9a02941-b2f3-4ffe-bdca-07a7322887b1,},Annotations:map[string]string{io.kubernetes.container.hash: cbfab74f,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0c6b246369b77b455404b70fd88b94e4da26cbd201a7ae30b74cb7039d25b8,PodSandboxId:7ee3eb4b0b10eb268d84ccf4b86511e118adba8ff72671265ad6936653c79876,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723459065193851382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wstd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53bfc998-8d70-4dc5-b0f9-a78610183a2b,},Annotations:map[string]string{io.kubernetes.container.hash: de677b7f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec7364f484b0d7fa35ace0124b8dc922a617b051b4ef8dc8d5513b3c9f49bf2b,PodSandboxId:a88f690225d3f6e35d63e64a591930de23dbdadec994a1d50c2a1302aaf9567d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723459065148016455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rc7cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
92f21234-d4e8-4f0e-a8e5-356db2297843,},Annotations:map[string]string{io.kubernetes.container.hash: f28abbb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0559eb25599b7a516fc431c43609c49bcf8d4a2d3a121ef0c25beb12c3ae16d,PodSandboxId:da089fb8954d6aad7bc10671ec94fd0050672aa408f2e4a34616fbda29b7753e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723459064778861507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d697e68-33fa-4784-90d8-0561d3fff6a8,},Annotations:map[string]string{io.kubernetes.container.hash: 4520a48c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d3c2394cc8cd76cb420cb4132296fa326bf586e6079af20c991bbd39a02e6bf,PodSandboxId:2abd5fefba6f34c44814b969b1a9bc9f1aa44fea6ad95145b733cb0c7e82bff9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CO
NTAINER_RUNNING,CreatedAt:1723459052942829200,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k5wz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e585a5-9ab7-4211-8ed0-dc1d21345883,},Annotations:map[string]string{io.kubernetes.container.hash: 44f25b1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd3e13fb2b3b4d48e2306ffd36c6241a47faafe75f1b528e3a316c9bec0705f,PodSandboxId:b7d28551c45a6f766983f4c1618f75eefcc2aa63991e48279d50e948648a119a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172345904
8117988565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftvfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed243a1-62f6-4ad1-8873-0fbe1756be9e,},Annotations:map[string]string{io.kubernetes.container.hash: 3e160f3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52237e0a859ca116f637782e69b8c477b172bcffe7dd962dcf7401651171c5ed,PodSandboxId:54a5959bc96a8e32170b615df8c382f8167bfb728ed211773bfe7d2c3147bf04,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17234590309
94071221,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bd97a44252f63fcee403b7e2f9c96fb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af78571207ce34c193f10ac91fe17888677e013439ec5e2bf1781b8f46309bf,PodSandboxId:06243d97384e5ebbf9480bd664848a21f3329d17103ac89c25d646cd5c6af2ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723459028074752327,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e4b07d53879d3ec685dd71228335fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c30877cfdccaa151966735f7bbaba34a1f500f301ce489538d01d863c68ee14,PodSandboxId:fae04d253fe0c5b3c2344ea51b9e34bb9ffbb9a35549113dbbdf82302161c5db,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723459028024412622,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148b0299f3f1839b42d2ab8e65cc0f2a,},Annotations:map[string]string{io.kubernetes.container.hash: 1194859b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b624c8fe2100a8281fab931d59941e13a68b3367ee7a36ece28d6087e8d1a6f,PodSandboxId:80f8c160f0149309a933338c0effa175e263894a3caa3501b57315b7b3a0fada,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723459028017431962,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: ku
be-apiserver-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e967e3926409b9b4490fa429d62fdc,},Annotations:map[string]string{io.kubernetes.container.hash: a14afb07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e76a506154546c22ce7972ea95053e0254f2cc2e30d7e1e31a666f212969115e,PodSandboxId:f1ce2bfb06df99d082f44d577edbb34634858412901a7fc407f11eb1ec217ccf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723459027942776933,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2498c72d72e1e71b3b9015542989ea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f57eacd1-e8c2-4f8e-9367-656e731c2b95 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:43:48 ha-919901 crio[680]: time="2024-08-12 10:43:48.698520129Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=567b1d02-4a3b-4c99-890f-61ac477b9688 name=/runtime.v1.RuntimeService/Version
	Aug 12 10:43:48 ha-919901 crio[680]: time="2024-08-12 10:43:48.698592296Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=567b1d02-4a3b-4c99-890f-61ac477b9688 name=/runtime.v1.RuntimeService/Version
	Aug 12 10:43:48 ha-919901 crio[680]: time="2024-08-12 10:43:48.700114696Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=717edbaa-90ea-4089-a037-6809474016e4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:43:48 ha-919901 crio[680]: time="2024-08-12 10:43:48.700842775Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723459428700811148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=717edbaa-90ea-4089-a037-6809474016e4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:43:48 ha-919901 crio[680]: time="2024-08-12 10:43:48.701716657Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c7a01104-ccc7-4200-a224-66817780c8ae name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:43:48 ha-919901 crio[680]: time="2024-08-12 10:43:48.701798044Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c7a01104-ccc7-4200-a224-66817780c8ae name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:43:48 ha-919901 crio[680]: time="2024-08-12 10:43:48.702070799Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8542d2fe34f2b44c191e084e1f85f0eb7b1a0b1a39d63b3fd3ba0510027c5668,PodSandboxId:40dfaa461230a2e373c966c081e58e06b69e54e472764d0338e67823d0a759f7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723459217675933508,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pj8gg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9a02941-b2f3-4ffe-bdca-07a7322887b1,},Annotations:map[string]string{io.kubernetes.container.hash: cbfab74f,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0c6b246369b77b455404b70fd88b94e4da26cbd201a7ae30b74cb7039d25b8,PodSandboxId:7ee3eb4b0b10eb268d84ccf4b86511e118adba8ff72671265ad6936653c79876,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723459065193851382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wstd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53bfc998-8d70-4dc5-b0f9-a78610183a2b,},Annotations:map[string]string{io.kubernetes.container.hash: de677b7f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec7364f484b0d7fa35ace0124b8dc922a617b051b4ef8dc8d5513b3c9f49bf2b,PodSandboxId:a88f690225d3f6e35d63e64a591930de23dbdadec994a1d50c2a1302aaf9567d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723459065148016455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rc7cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
92f21234-d4e8-4f0e-a8e5-356db2297843,},Annotations:map[string]string{io.kubernetes.container.hash: f28abbb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0559eb25599b7a516fc431c43609c49bcf8d4a2d3a121ef0c25beb12c3ae16d,PodSandboxId:da089fb8954d6aad7bc10671ec94fd0050672aa408f2e4a34616fbda29b7753e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723459064778861507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d697e68-33fa-4784-90d8-0561d3fff6a8,},Annotations:map[string]string{io.kubernetes.container.hash: 4520a48c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d3c2394cc8cd76cb420cb4132296fa326bf586e6079af20c991bbd39a02e6bf,PodSandboxId:2abd5fefba6f34c44814b969b1a9bc9f1aa44fea6ad95145b733cb0c7e82bff9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CO
NTAINER_RUNNING,CreatedAt:1723459052942829200,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k5wz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e585a5-9ab7-4211-8ed0-dc1d21345883,},Annotations:map[string]string{io.kubernetes.container.hash: 44f25b1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd3e13fb2b3b4d48e2306ffd36c6241a47faafe75f1b528e3a316c9bec0705f,PodSandboxId:b7d28551c45a6f766983f4c1618f75eefcc2aa63991e48279d50e948648a119a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172345904
8117988565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftvfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed243a1-62f6-4ad1-8873-0fbe1756be9e,},Annotations:map[string]string{io.kubernetes.container.hash: 3e160f3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52237e0a859ca116f637782e69b8c477b172bcffe7dd962dcf7401651171c5ed,PodSandboxId:54a5959bc96a8e32170b615df8c382f8167bfb728ed211773bfe7d2c3147bf04,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17234590309
94071221,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bd97a44252f63fcee403b7e2f9c96fb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af78571207ce34c193f10ac91fe17888677e013439ec5e2bf1781b8f46309bf,PodSandboxId:06243d97384e5ebbf9480bd664848a21f3329d17103ac89c25d646cd5c6af2ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723459028074752327,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e4b07d53879d3ec685dd71228335fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c30877cfdccaa151966735f7bbaba34a1f500f301ce489538d01d863c68ee14,PodSandboxId:fae04d253fe0c5b3c2344ea51b9e34bb9ffbb9a35549113dbbdf82302161c5db,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723459028024412622,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148b0299f3f1839b42d2ab8e65cc0f2a,},Annotations:map[string]string{io.kubernetes.container.hash: 1194859b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b624c8fe2100a8281fab931d59941e13a68b3367ee7a36ece28d6087e8d1a6f,PodSandboxId:80f8c160f0149309a933338c0effa175e263894a3caa3501b57315b7b3a0fada,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723459028017431962,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: ku
be-apiserver-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e967e3926409b9b4490fa429d62fdc,},Annotations:map[string]string{io.kubernetes.container.hash: a14afb07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e76a506154546c22ce7972ea95053e0254f2cc2e30d7e1e31a666f212969115e,PodSandboxId:f1ce2bfb06df99d082f44d577edbb34634858412901a7fc407f11eb1ec217ccf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723459027942776933,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2498c72d72e1e71b3b9015542989ea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c7a01104-ccc7-4200-a224-66817780c8ae name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:43:48 ha-919901 crio[680]: time="2024-08-12 10:43:48.739958922Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=818b3866-2b85-4d93-b00a-1a8594e57daf name=/runtime.v1.RuntimeService/Version
	Aug 12 10:43:48 ha-919901 crio[680]: time="2024-08-12 10:43:48.740035228Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=818b3866-2b85-4d93-b00a-1a8594e57daf name=/runtime.v1.RuntimeService/Version
	Aug 12 10:43:48 ha-919901 crio[680]: time="2024-08-12 10:43:48.741067483Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a1fca572-c04d-48f9-8b89-060d54e95f39 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:43:48 ha-919901 crio[680]: time="2024-08-12 10:43:48.741644341Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723459428741617393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1fca572-c04d-48f9-8b89-060d54e95f39 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:43:48 ha-919901 crio[680]: time="2024-08-12 10:43:48.742110441Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be972015-ee32-46b0-8c58-6cb343af90b7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:43:48 ha-919901 crio[680]: time="2024-08-12 10:43:48.742175524Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be972015-ee32-46b0-8c58-6cb343af90b7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:43:48 ha-919901 crio[680]: time="2024-08-12 10:43:48.742472763Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8542d2fe34f2b44c191e084e1f85f0eb7b1a0b1a39d63b3fd3ba0510027c5668,PodSandboxId:40dfaa461230a2e373c966c081e58e06b69e54e472764d0338e67823d0a759f7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723459217675933508,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pj8gg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9a02941-b2f3-4ffe-bdca-07a7322887b1,},Annotations:map[string]string{io.kubernetes.container.hash: cbfab74f,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0c6b246369b77b455404b70fd88b94e4da26cbd201a7ae30b74cb7039d25b8,PodSandboxId:7ee3eb4b0b10eb268d84ccf4b86511e118adba8ff72671265ad6936653c79876,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723459065193851382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wstd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53bfc998-8d70-4dc5-b0f9-a78610183a2b,},Annotations:map[string]string{io.kubernetes.container.hash: de677b7f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec7364f484b0d7fa35ace0124b8dc922a617b051b4ef8dc8d5513b3c9f49bf2b,PodSandboxId:a88f690225d3f6e35d63e64a591930de23dbdadec994a1d50c2a1302aaf9567d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723459065148016455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rc7cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
92f21234-d4e8-4f0e-a8e5-356db2297843,},Annotations:map[string]string{io.kubernetes.container.hash: f28abbb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0559eb25599b7a516fc431c43609c49bcf8d4a2d3a121ef0c25beb12c3ae16d,PodSandboxId:da089fb8954d6aad7bc10671ec94fd0050672aa408f2e4a34616fbda29b7753e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723459064778861507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d697e68-33fa-4784-90d8-0561d3fff6a8,},Annotations:map[string]string{io.kubernetes.container.hash: 4520a48c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d3c2394cc8cd76cb420cb4132296fa326bf586e6079af20c991bbd39a02e6bf,PodSandboxId:2abd5fefba6f34c44814b969b1a9bc9f1aa44fea6ad95145b733cb0c7e82bff9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CO
NTAINER_RUNNING,CreatedAt:1723459052942829200,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k5wz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e585a5-9ab7-4211-8ed0-dc1d21345883,},Annotations:map[string]string{io.kubernetes.container.hash: 44f25b1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd3e13fb2b3b4d48e2306ffd36c6241a47faafe75f1b528e3a316c9bec0705f,PodSandboxId:b7d28551c45a6f766983f4c1618f75eefcc2aa63991e48279d50e948648a119a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172345904
8117988565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftvfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed243a1-62f6-4ad1-8873-0fbe1756be9e,},Annotations:map[string]string{io.kubernetes.container.hash: 3e160f3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52237e0a859ca116f637782e69b8c477b172bcffe7dd962dcf7401651171c5ed,PodSandboxId:54a5959bc96a8e32170b615df8c382f8167bfb728ed211773bfe7d2c3147bf04,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17234590309
94071221,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bd97a44252f63fcee403b7e2f9c96fb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af78571207ce34c193f10ac91fe17888677e013439ec5e2bf1781b8f46309bf,PodSandboxId:06243d97384e5ebbf9480bd664848a21f3329d17103ac89c25d646cd5c6af2ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723459028074752327,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e4b07d53879d3ec685dd71228335fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c30877cfdccaa151966735f7bbaba34a1f500f301ce489538d01d863c68ee14,PodSandboxId:fae04d253fe0c5b3c2344ea51b9e34bb9ffbb9a35549113dbbdf82302161c5db,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723459028024412622,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148b0299f3f1839b42d2ab8e65cc0f2a,},Annotations:map[string]string{io.kubernetes.container.hash: 1194859b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b624c8fe2100a8281fab931d59941e13a68b3367ee7a36ece28d6087e8d1a6f,PodSandboxId:80f8c160f0149309a933338c0effa175e263894a3caa3501b57315b7b3a0fada,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723459028017431962,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: ku
be-apiserver-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e967e3926409b9b4490fa429d62fdc,},Annotations:map[string]string{io.kubernetes.container.hash: a14afb07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e76a506154546c22ce7972ea95053e0254f2cc2e30d7e1e31a666f212969115e,PodSandboxId:f1ce2bfb06df99d082f44d577edbb34634858412901a7fc407f11eb1ec217ccf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723459027942776933,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2498c72d72e1e71b3b9015542989ea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be972015-ee32-46b0-8c58-6cb343af90b7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:43:48 ha-919901 crio[680]: time="2024-08-12 10:43:48.788090779Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d03e3051-77e2-4eba-af78-dcf111294317 name=/runtime.v1.RuntimeService/Version
	Aug 12 10:43:48 ha-919901 crio[680]: time="2024-08-12 10:43:48.788166093Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d03e3051-77e2-4eba-af78-dcf111294317 name=/runtime.v1.RuntimeService/Version
	Aug 12 10:43:48 ha-919901 crio[680]: time="2024-08-12 10:43:48.789649933Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bf61a287-ee07-497e-93b8-64dc300a75a8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:43:48 ha-919901 crio[680]: time="2024-08-12 10:43:48.790116681Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723459428790093331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bf61a287-ee07-497e-93b8-64dc300a75a8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:43:48 ha-919901 crio[680]: time="2024-08-12 10:43:48.790620886Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29e31a6c-c6c1-479b-aa96-1269c18b974d name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:43:48 ha-919901 crio[680]: time="2024-08-12 10:43:48.790693208Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29e31a6c-c6c1-479b-aa96-1269c18b974d name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:43:48 ha-919901 crio[680]: time="2024-08-12 10:43:48.791520759Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8542d2fe34f2b44c191e084e1f85f0eb7b1a0b1a39d63b3fd3ba0510027c5668,PodSandboxId:40dfaa461230a2e373c966c081e58e06b69e54e472764d0338e67823d0a759f7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723459217675933508,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pj8gg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9a02941-b2f3-4ffe-bdca-07a7322887b1,},Annotations:map[string]string{io.kubernetes.container.hash: cbfab74f,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0c6b246369b77b455404b70fd88b94e4da26cbd201a7ae30b74cb7039d25b8,PodSandboxId:7ee3eb4b0b10eb268d84ccf4b86511e118adba8ff72671265ad6936653c79876,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723459065193851382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wstd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53bfc998-8d70-4dc5-b0f9-a78610183a2b,},Annotations:map[string]string{io.kubernetes.container.hash: de677b7f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec7364f484b0d7fa35ace0124b8dc922a617b051b4ef8dc8d5513b3c9f49bf2b,PodSandboxId:a88f690225d3f6e35d63e64a591930de23dbdadec994a1d50c2a1302aaf9567d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723459065148016455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rc7cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
92f21234-d4e8-4f0e-a8e5-356db2297843,},Annotations:map[string]string{io.kubernetes.container.hash: f28abbb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0559eb25599b7a516fc431c43609c49bcf8d4a2d3a121ef0c25beb12c3ae16d,PodSandboxId:da089fb8954d6aad7bc10671ec94fd0050672aa408f2e4a34616fbda29b7753e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723459064778861507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d697e68-33fa-4784-90d8-0561d3fff6a8,},Annotations:map[string]string{io.kubernetes.container.hash: 4520a48c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d3c2394cc8cd76cb420cb4132296fa326bf586e6079af20c991bbd39a02e6bf,PodSandboxId:2abd5fefba6f34c44814b969b1a9bc9f1aa44fea6ad95145b733cb0c7e82bff9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CO
NTAINER_RUNNING,CreatedAt:1723459052942829200,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k5wz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e585a5-9ab7-4211-8ed0-dc1d21345883,},Annotations:map[string]string{io.kubernetes.container.hash: 44f25b1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd3e13fb2b3b4d48e2306ffd36c6241a47faafe75f1b528e3a316c9bec0705f,PodSandboxId:b7d28551c45a6f766983f4c1618f75eefcc2aa63991e48279d50e948648a119a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172345904
8117988565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftvfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed243a1-62f6-4ad1-8873-0fbe1756be9e,},Annotations:map[string]string{io.kubernetes.container.hash: 3e160f3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52237e0a859ca116f637782e69b8c477b172bcffe7dd962dcf7401651171c5ed,PodSandboxId:54a5959bc96a8e32170b615df8c382f8167bfb728ed211773bfe7d2c3147bf04,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17234590309
94071221,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bd97a44252f63fcee403b7e2f9c96fb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af78571207ce34c193f10ac91fe17888677e013439ec5e2bf1781b8f46309bf,PodSandboxId:06243d97384e5ebbf9480bd664848a21f3329d17103ac89c25d646cd5c6af2ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723459028074752327,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e4b07d53879d3ec685dd71228335fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c30877cfdccaa151966735f7bbaba34a1f500f301ce489538d01d863c68ee14,PodSandboxId:fae04d253fe0c5b3c2344ea51b9e34bb9ffbb9a35549113dbbdf82302161c5db,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723459028024412622,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148b0299f3f1839b42d2ab8e65cc0f2a,},Annotations:map[string]string{io.kubernetes.container.hash: 1194859b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b624c8fe2100a8281fab931d59941e13a68b3367ee7a36ece28d6087e8d1a6f,PodSandboxId:80f8c160f0149309a933338c0effa175e263894a3caa3501b57315b7b3a0fada,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723459028017431962,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: ku
be-apiserver-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e967e3926409b9b4490fa429d62fdc,},Annotations:map[string]string{io.kubernetes.container.hash: a14afb07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e76a506154546c22ce7972ea95053e0254f2cc2e30d7e1e31a666f212969115e,PodSandboxId:f1ce2bfb06df99d082f44d577edbb34634858412901a7fc407f11eb1ec217ccf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723459027942776933,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2498c72d72e1e71b3b9015542989ea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=29e31a6c-c6c1-479b-aa96-1269c18b974d name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:43:48 ha-919901 crio[680]: time="2024-08-12 10:43:48.796718930Z" level=debug msg="received signal" file="crio/main.go:57" signal="broken pipe"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8542d2fe34f2b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   40dfaa461230a       busybox-fc5497c4f-pj8gg
	6d0c6b246369b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   7ee3eb4b0b10e       coredns-7db6d8ff4d-wstd4
	ec7364f484b0d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   a88f690225d3f       coredns-7db6d8ff4d-rc7cl
	f0559eb25599b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   da089fb8954d6       storage-provisioner
	4d3c2394cc8cd       docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3    6 minutes ago       Running             kindnet-cni               0                   2abd5fefba6f3       kindnet-k5wz9
	7cd3e13fb2b3b       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      6 minutes ago       Running             kube-proxy                0                   b7d28551c45a6       kube-proxy-ftvfl
	52237e0a859ca       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   54a5959bc96a8       kube-vip-ha-919901
	2af78571207ce       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      6 minutes ago       Running             kube-scheduler            0                   06243d97384e5       kube-scheduler-ha-919901
	0c30877cfdcca       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   fae04d253fe0c       etcd-ha-919901
	2b624c8fe2100       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      6 minutes ago       Running             kube-apiserver            0                   80f8c160f0149       kube-apiserver-ha-919901
	e76a506154546       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      6 minutes ago       Running             kube-controller-manager   0                   f1ce2bfb06df9       kube-controller-manager-ha-919901
	
	
	==> coredns [6d0c6b246369b77b455404b70fd88b94e4da26cbd201a7ae30b74cb7039d25b8] <==
	[INFO] 10.244.0.4:56545 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000091906s
	[INFO] 10.244.0.4:43928 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000079555s
	[INFO] 10.244.2.2:33666 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000141234s
	[INFO] 10.244.2.2:40403 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000077505s
	[INFO] 10.244.2.2:60651 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001944453s
	[INFO] 10.244.1.2:41656 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000234118s
	[INFO] 10.244.1.2:37332 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00027744s
	[INFO] 10.244.1.2:40223 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.010736666s
	[INFO] 10.244.0.4:34313 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099644s
	[INFO] 10.244.0.4:42226 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0013952s
	[INFO] 10.244.0.4:57222 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00017573s
	[INFO] 10.244.0.4:58894 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088282s
	[INFO] 10.244.2.2:46163 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143718s
	[INFO] 10.244.2.2:51332 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158612s
	[INFO] 10.244.2.2:38508 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000102467s
	[INFO] 10.244.1.2:36638 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127128s
	[INFO] 10.244.1.2:48634 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000196174s
	[INFO] 10.244.1.2:34717 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000153611s
	[INFO] 10.244.1.2:59132 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121069s
	[INFO] 10.244.0.4:52263 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00018165s
	[INFO] 10.244.0.4:33949 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000137401s
	[INFO] 10.244.0.4:50775 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000059871s
	[INFO] 10.244.2.2:49015 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152696s
	[INFO] 10.244.2.2:39997 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000159415s
	[INFO] 10.244.2.2:33769 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000094484s
	
	
	==> coredns [ec7364f484b0d7fa35ace0124b8dc922a617b051b4ef8dc8d5513b3c9f49bf2b] <==
	[INFO] 10.244.1.2:40066 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158597s
	[INFO] 10.244.1.2:59324 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000176108s
	[INFO] 10.244.0.4:36927 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001973861s
	[INFO] 10.244.0.4:39495 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000244693s
	[INFO] 10.244.0.4:42312 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000071889s
	[INFO] 10.244.0.4:36852 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079487s
	[INFO] 10.244.2.2:51413 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001945024s
	[INFO] 10.244.2.2:47991 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000079163s
	[INFO] 10.244.2.2:37019 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001502663s
	[INFO] 10.244.2.2:54793 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077144s
	[INFO] 10.244.2.2:58782 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056455s
	[INFO] 10.244.1.2:54292 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137507s
	[INFO] 10.244.1.2:59115 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089729s
	[INFO] 10.244.0.4:40377 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115376s
	[INFO] 10.244.0.4:56017 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088959s
	[INFO] 10.244.0.4:52411 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000057997s
	[INFO] 10.244.0.4:46999 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00005214s
	[INFO] 10.244.2.2:42855 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167607s
	[INFO] 10.244.2.2:43154 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117622s
	[INFO] 10.244.2.2:33056 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087079s
	[INFO] 10.244.2.2:52436 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114815s
	[INFO] 10.244.1.2:57727 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129686s
	[INFO] 10.244.1.2:60878 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00018786s
	[INFO] 10.244.0.4:47644 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000114448s
	[INFO] 10.244.2.2:38930 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000159722s
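
	Each coredns line above records the client address, query type and name, protocol, response code, response flags, response size, and duration. A minimal sketch of the kind of lookup that produces these entries follows, assuming it is run from a pod in this cluster so that resolution goes through the cluster DNS; the service name is taken from the log, everything else is illustrative.

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Forward lookup: yields the "A/AAAA IN kubernetes.default.svc.cluster.local."
		// queries seen in the coredns log above.
		addrs, err := net.LookupHost("kubernetes.default.svc.cluster.local")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("kubernetes service resolves to:", addrs)

		// Reverse lookup: yields the "PTR IN ...in-addr.arpa." queries.
		for _, a := range addrs {
			if names, err := net.LookupAddr(a); err == nil {
				fmt.Println(a, "->", names)
			}
		}
	}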
	
	
	==> describe nodes <==
	Name:               ha-919901
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-919901
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
	                    minikube.k8s.io/name=ha-919901
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_12T10_37_15_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 10:37:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-919901
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 10:43:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 10:40:47 +0000   Mon, 12 Aug 2024 10:37:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 10:40:47 +0000   Mon, 12 Aug 2024 10:37:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 10:40:47 +0000   Mon, 12 Aug 2024 10:37:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 10:40:47 +0000   Mon, 12 Aug 2024 10:37:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.5
	  Hostname:    ha-919901
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0604b91ac2ed4dfdb4f1eba3f89f2634
	  System UUID:                0604b91a-c2ed-4dfd-b4f1-eba3f89f2634
	  Boot ID:                    e69dd59d-8862-4943-a8be-e27de6624ddc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pj8gg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 coredns-7db6d8ff4d-rc7cl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m22s
	  kube-system                 coredns-7db6d8ff4d-wstd4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m22s
	  kube-system                 etcd-ha-919901                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m35s
	  kube-system                 kindnet-k5wz9                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m22s
	  kube-system                 kube-apiserver-ha-919901             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-controller-manager-ha-919901    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-proxy-ftvfl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-scheduler-ha-919901             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-vip-ha-919901                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m20s  kube-proxy       
	  Normal  Starting                 6m35s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m35s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m35s  kubelet          Node ha-919901 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m35s  kubelet          Node ha-919901 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m35s  kubelet          Node ha-919901 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m23s  node-controller  Node ha-919901 event: Registered Node ha-919901 in Controller
	  Normal  NodeReady                6m5s   kubelet          Node ha-919901 status is now: NodeReady
	  Normal  RegisteredNode           5m3s   node-controller  Node ha-919901 event: Registered Node ha-919901 in Controller
	  Normal  RegisteredNode           3m46s  node-controller  Node ha-919901 event: Registered Node ha-919901 in Controller
	
	
	Name:               ha-919901-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-919901-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
	                    minikube.k8s.io/name=ha-919901
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_12T10_38_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 10:38:28 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-919901-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 10:41:22 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 12 Aug 2024 10:40:31 +0000   Mon, 12 Aug 2024 10:42:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 12 Aug 2024 10:40:31 +0000   Mon, 12 Aug 2024 10:42:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 12 Aug 2024 10:40:31 +0000   Mon, 12 Aug 2024 10:42:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 12 Aug 2024 10:40:31 +0000   Mon, 12 Aug 2024 10:42:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.139
	  Hostname:    ha-919901-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b2d78288ee7d4cf8b54a7dd9f4bdd0a2
	  System UUID:                b2d78288-ee7d-4cf8-b54a-7dd9f4bdd0a2
	  Boot ID:                    fc484ec8-2cf0-4341-b6f0-32aea18b1ad9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-46rph                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 etcd-ha-919901-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m19s
	  kube-system                 kindnet-8cqm5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m21s
	  kube-system                 kube-apiserver-ha-919901-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-controller-manager-ha-919901-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-proxy-cczfj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-scheduler-ha-919901-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-vip-ha-919901-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m16s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m21s (x8 over 5m21s)  kubelet          Node ha-919901-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m21s (x8 over 5m21s)  kubelet          Node ha-919901-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m21s (x7 over 5m21s)  kubelet          Node ha-919901-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m18s                  node-controller  Node ha-919901-m02 event: Registered Node ha-919901-m02 in Controller
	  Normal  RegisteredNode           5m3s                   node-controller  Node ha-919901-m02 event: Registered Node ha-919901-m02 in Controller
	  Normal  RegisteredNode           3m46s                  node-controller  Node ha-919901-m02 event: Registered Node ha-919901-m02 in Controller
	  Normal  NodeNotReady             106s                   node-controller  Node ha-919901-m02 status is now: NodeNotReady
	
	
	Name:               ha-919901-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-919901-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
	                    minikube.k8s.io/name=ha-919901
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_12T10_39_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 10:39:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-919901-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 10:43:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 10:40:46 +0000   Mon, 12 Aug 2024 10:39:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 10:40:46 +0000   Mon, 12 Aug 2024 10:39:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 10:40:46 +0000   Mon, 12 Aug 2024 10:39:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 10:40:46 +0000   Mon, 12 Aug 2024 10:40:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    ha-919901-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 018b12c9070f4bf48440eace9c0062df
	  System UUID:                018b12c9-070f-4bf4-8440-eace9c0062df
	  Boot ID:                    e9258875-f780-4a62-84da-f7421903e7ec
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-v6ddx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 etcd-ha-919901-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m2s
	  kube-system                 kindnet-6v7rs                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m4s
	  kube-system                 kube-apiserver-ha-919901-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-controller-manager-ha-919901-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-proxy-6xqjr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-ha-919901-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-vip-ha-919901-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m59s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m4s (x8 over 4m4s)  kubelet          Node ha-919901-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m4s (x8 over 4m4s)  kubelet          Node ha-919901-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m4s (x7 over 4m4s)  kubelet          Node ha-919901-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m3s                 node-controller  Node ha-919901-m03 event: Registered Node ha-919901-m03 in Controller
	  Normal  RegisteredNode           4m3s                 node-controller  Node ha-919901-m03 event: Registered Node ha-919901-m03 in Controller
	  Normal  RegisteredNode           3m46s                node-controller  Node ha-919901-m03 event: Registered Node ha-919901-m03 in Controller
	
	
	Name:               ha-919901-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-919901-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
	                    minikube.k8s.io/name=ha-919901
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_12T10_40_49_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 10:40:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-919901-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 10:43:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 10:41:19 +0000   Mon, 12 Aug 2024 10:40:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 10:41:19 +0000   Mon, 12 Aug 2024 10:40:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 10:41:19 +0000   Mon, 12 Aug 2024 10:40:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 10:41:19 +0000   Mon, 12 Aug 2024 10:41:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.218
	  Hostname:    ha-919901-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d9924b3342904c65bcf17b38012b444a
	  System UUID:                d9924b33-4290-4c65-bcf1-7b38012b444a
	  Boot ID:                    04e52e72-fe17-4416-bddf-da5e40736490
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-clr9b       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m1s
	  kube-system                 kube-proxy-2h4vt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-919901-m04 event: Registered Node ha-919901-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m1s (x2 over 3m1s)  kubelet          Node ha-919901-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x2 over 3m1s)  kubelet          Node ha-919901-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x2 over 3m1s)  kubelet          Node ha-919901-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m58s                node-controller  Node ha-919901-m04 event: Registered Node ha-919901-m04 in Controller
	  Normal  RegisteredNode           2m58s                node-controller  Node ha-919901-m04 event: Registered Node ha-919901-m04 in Controller
	  Normal  NodeReady                2m42s                kubelet          Node ha-919901-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug12 10:36] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050882] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037870] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.740086] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.846102] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.484807] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.272888] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.064986] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.049228] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.190717] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.120674] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.278615] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[Aug12 10:37] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +3.648433] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.060066] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.249848] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.088679] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.931862] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.868842] kauditd_printk_skb: 29 callbacks suppressed
	[Aug12 10:38] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [0c30877cfdccaa151966735f7bbaba34a1f500f301ce489538d01d863c68ee14] <==
	{"level":"warn","ts":"2024-08-12T10:43:48.9928Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:43:49.060427Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:43:49.071522Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:43:49.076689Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:43:49.091564Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:43:49.092454Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:43:49.093172Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:43:49.10082Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:43:49.107471Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:43:49.111686Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:43:49.115011Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:43:49.124454Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:43:49.137003Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:43:49.144082Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:43:49.148806Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:43:49.152506Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:43:49.160912Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:43:49.168985Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:43:49.17689Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:43:49.180769Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:43:49.184774Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:43:49.190562Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:43:49.192583Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:43:49.197193Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:43:49.203719Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 10:43:49 up 7 min,  0 users,  load average: 0.29, 0.41, 0.24
	Linux ha-919901 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4d3c2394cc8cd76cb420cb4132296fa326bf586e6079af20c991bbd39a02e6bf] <==
	I0812 10:43:13.958757       1 main.go:322] Node ha-919901-m04 has CIDR [10.244.3.0/24] 
	I0812 10:43:23.961097       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0812 10:43:23.961159       1 main.go:299] handling current node
	I0812 10:43:23.961181       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0812 10:43:23.961188       1 main.go:322] Node ha-919901-m02 has CIDR [10.244.1.0/24] 
	I0812 10:43:23.961396       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0812 10:43:23.961431       1 main.go:322] Node ha-919901-m03 has CIDR [10.244.2.0/24] 
	I0812 10:43:23.961626       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0812 10:43:23.961672       1 main.go:322] Node ha-919901-m04 has CIDR [10.244.3.0/24] 
	I0812 10:43:33.951514       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0812 10:43:33.951630       1 main.go:299] handling current node
	I0812 10:43:33.951682       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0812 10:43:33.951733       1 main.go:322] Node ha-919901-m02 has CIDR [10.244.1.0/24] 
	I0812 10:43:33.951961       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0812 10:43:33.952049       1 main.go:322] Node ha-919901-m03 has CIDR [10.244.2.0/24] 
	I0812 10:43:33.952127       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0812 10:43:33.952146       1 main.go:322] Node ha-919901-m04 has CIDR [10.244.3.0/24] 
	I0812 10:43:43.952589       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0812 10:43:43.952637       1 main.go:299] handling current node
	I0812 10:43:43.952654       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0812 10:43:43.952662       1 main.go:322] Node ha-919901-m02 has CIDR [10.244.1.0/24] 
	I0812 10:43:43.952817       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0812 10:43:43.952836       1 main.go:322] Node ha-919901-m03 has CIDR [10.244.2.0/24] 
	I0812 10:43:43.952892       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0812 10:43:43.952898       1 main.go:322] Node ha-919901-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [2b624c8fe2100a8281fab931d59941e13a68b3367ee7a36ece28d6087e8d1a6f] <==
	I0812 10:37:13.160923       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0812 10:37:13.174462       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.5]
	I0812 10:37:13.176611       1 controller.go:615] quota admission added evaluator for: endpoints
	I0812 10:37:13.181941       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0812 10:37:13.260864       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0812 10:37:14.337272       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0812 10:37:14.360762       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0812 10:37:14.504891       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0812 10:37:26.787949       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0812 10:37:27.466488       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0812 10:40:18.956949       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44182: use of closed network connection
	E0812 10:40:19.142427       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44196: use of closed network connection
	E0812 10:40:19.346412       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44216: use of closed network connection
	E0812 10:40:19.541746       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44224: use of closed network connection
	E0812 10:40:19.719361       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44238: use of closed network connection
	E0812 10:40:19.904586       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44244: use of closed network connection
	E0812 10:40:20.086113       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44274: use of closed network connection
	E0812 10:40:20.278779       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44292: use of closed network connection
	E0812 10:40:20.460778       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44310: use of closed network connection
	E0812 10:40:20.761979       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46406: use of closed network connection
	E0812 10:40:20.936139       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46428: use of closed network connection
	E0812 10:40:21.162853       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46438: use of closed network connection
	E0812 10:40:21.350699       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46458: use of closed network connection
	E0812 10:40:21.531307       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46462: use of closed network connection
	E0812 10:40:21.716849       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46490: use of closed network connection
	
	
	==> kube-controller-manager [e76a506154546c22ce7972ea95053e0254f2cc2e30d7e1e31a666f212969115e] <==
	I0812 10:39:45.372434       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-919901-m03" podCIDRs=["10.244.2.0/24"]
	I0812 10:39:46.766154       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-919901-m03"
	I0812 10:40:14.595729       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="126.558106ms"
	I0812 10:40:14.727932       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="132.019982ms"
	I0812 10:40:14.902293       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="173.295981ms"
	I0812 10:40:15.008810       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="106.469116ms"
	E0812 10:40:15.008860       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0812 10:40:15.009076       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="138.079µs"
	I0812 10:40:15.016147       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.267µs"
	I0812 10:40:15.282291       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="112.327µs"
	I0812 10:40:18.258747       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.598612ms"
	I0812 10:40:18.259274       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="125.915µs"
	I0812 10:40:18.291900       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.247096ms"
	I0812 10:40:18.293628       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.082µs"
	I0812 10:40:18.495732       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.563624ms"
	I0812 10:40:18.496958       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="105.896µs"
	E0812 10:40:48.092722       1 certificate_controller.go:146] Sync csr-cvlct failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-cvlct": the object has been modified; please apply your changes to the latest version and try again
	E0812 10:40:48.102197       1 certificate_controller.go:146] Sync csr-cvlct failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-cvlct": the object has been modified; please apply your changes to the latest version and try again
	I0812 10:40:48.366957       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-919901-m04\" does not exist"
	I0812 10:40:48.414765       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-919901-m04" podCIDRs=["10.244.3.0/24"]
	I0812 10:40:51.870064       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-919901-m04"
	I0812 10:41:07.861699       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-919901-m04"
	I0812 10:42:03.832401       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-919901-m04"
	I0812 10:42:03.879343       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.03547ms"
	I0812 10:42:03.880483       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.255µs"
	
	
	==> kube-proxy [7cd3e13fb2b3b4d48e2306ffd36c6241a47faafe75f1b528e3a316c9bec0705f] <==
	I0812 10:37:28.448360       1 server_linux.go:69] "Using iptables proxy"
	I0812 10:37:28.490783       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.5"]
	I0812 10:37:28.537171       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0812 10:37:28.537271       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0812 10:37:28.537290       1 server_linux.go:165] "Using iptables Proxier"
	I0812 10:37:28.541575       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0812 10:37:28.542279       1 server.go:872] "Version info" version="v1.30.3"
	I0812 10:37:28.542307       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 10:37:28.546922       1 config.go:192] "Starting service config controller"
	I0812 10:37:28.546997       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0812 10:37:28.547176       1 config.go:101] "Starting endpoint slice config controller"
	I0812 10:37:28.547313       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0812 10:37:28.548759       1 config.go:319] "Starting node config controller"
	I0812 10:37:28.548785       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0812 10:37:28.648203       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0812 10:37:28.648337       1 shared_informer.go:320] Caches are synced for service config
	I0812 10:37:28.649030       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2af78571207ce34c193f10ac91fe17888677e013439ec5e2bf1781b8f46309bf] <==
	E0812 10:37:12.736146       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0812 10:37:14.999883       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0812 10:39:45.445909       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-6xqjr\": pod kube-proxy-6xqjr is already assigned to node \"ha-919901-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-6xqjr" node="ha-919901-m03"
	E0812 10:39:45.446133       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-6xqjr\": pod kube-proxy-6xqjr is already assigned to node \"ha-919901-m03\"" pod="kube-system/kube-proxy-6xqjr"
	I0812 10:39:45.446184       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-6xqjr" node="ha-919901-m03"
	E0812 10:39:45.446998       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-6v7rs\": pod kindnet-6v7rs is already assigned to node \"ha-919901-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-6v7rs" node="ha-919901-m03"
	E0812 10:39:45.447058       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 43c3bf93-f498-4ea3-b494-a1f06e64e2d2(kube-system/kindnet-6v7rs) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-6v7rs"
	E0812 10:39:45.447082       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-6v7rs\": pod kindnet-6v7rs is already assigned to node \"ha-919901-m03\"" pod="kube-system/kindnet-6v7rs"
	I0812 10:39:45.447108       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-6v7rs" node="ha-919901-m03"
	E0812 10:39:45.561301       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-xjhsb\": pod kube-proxy-xjhsb is already assigned to node \"ha-919901-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-xjhsb" node="ha-919901-m03"
	E0812 10:39:45.561578       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod b68bad98-fc42-4b06-beac-91bcaef3749c(kube-system/kube-proxy-xjhsb) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-xjhsb"
	E0812 10:39:45.561672       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-xjhsb\": pod kube-proxy-xjhsb is already assigned to node \"ha-919901-m03\"" pod="kube-system/kube-proxy-xjhsb"
	I0812 10:39:45.561699       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-xjhsb" node="ha-919901-m03"
	E0812 10:40:14.546495       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-v6ddx\": pod busybox-fc5497c4f-v6ddx is already assigned to node \"ha-919901-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-v6ddx" node="ha-919901-m03"
	E0812 10:40:14.546746       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 06fbbe15-dd57-4276-b19d-9c6c7ea2ea44(default/busybox-fc5497c4f-v6ddx) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-v6ddx"
	E0812 10:40:14.547178       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-v6ddx\": pod busybox-fc5497c4f-v6ddx is already assigned to node \"ha-919901-m03\"" pod="default/busybox-fc5497c4f-v6ddx"
	I0812 10:40:14.547314       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-v6ddx" node="ha-919901-m03"
	E0812 10:40:14.584416       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-pj8gg\": pod busybox-fc5497c4f-pj8gg is already assigned to node \"ha-919901\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-pj8gg" node="ha-919901"
	E0812 10:40:14.584474       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod b9a02941-b2f3-4ffe-bdca-07a7322887b1(default/busybox-fc5497c4f-pj8gg) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-pj8gg"
	E0812 10:40:14.584494       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-pj8gg\": pod busybox-fc5497c4f-pj8gg is already assigned to node \"ha-919901\"" pod="default/busybox-fc5497c4f-pj8gg"
	I0812 10:40:14.584510       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-pj8gg" node="ha-919901"
	E0812 10:40:14.594617       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-46rph\": pod busybox-fc5497c4f-46rph is already assigned to node \"ha-919901-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-46rph" node="ha-919901-m02"
	E0812 10:40:14.594677       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 1851351d-2c94-43c9-b72e-87f74b2326db(default/busybox-fc5497c4f-46rph) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-46rph"
	E0812 10:40:14.594693       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-46rph\": pod busybox-fc5497c4f-46rph is already assigned to node \"ha-919901-m02\"" pod="default/busybox-fc5497c4f-46rph"
	I0812 10:40:14.594711       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-46rph" node="ha-919901-m02"
	
	
	==> kubelet <==
	Aug 12 10:39:14 ha-919901 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 10:39:14 ha-919901 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 10:40:14 ha-919901 kubelet[1369]: E0812 10:40:14.519691    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 10:40:14 ha-919901 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 10:40:14 ha-919901 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 10:40:14 ha-919901 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 10:40:14 ha-919901 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 10:40:14 ha-919901 kubelet[1369]: I0812 10:40:14.581605    1369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=166.581556293 podStartE2EDuration="2m46.581556293s" podCreationTimestamp="2024-08-12 10:37:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-12 10:37:45.760893349 +0000 UTC m=+31.457240289" watchObservedRunningTime="2024-08-12 10:40:14.581556293 +0000 UTC m=+180.277903240"
	Aug 12 10:40:14 ha-919901 kubelet[1369]: I0812 10:40:14.586171    1369 topology_manager.go:215] "Topology Admit Handler" podUID="b9a02941-b2f3-4ffe-bdca-07a7322887b1" podNamespace="default" podName="busybox-fc5497c4f-pj8gg"
	Aug 12 10:40:14 ha-919901 kubelet[1369]: I0812 10:40:14.641285    1369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4htt\" (UniqueName: \"kubernetes.io/projected/b9a02941-b2f3-4ffe-bdca-07a7322887b1-kube-api-access-d4htt\") pod \"busybox-fc5497c4f-pj8gg\" (UID: \"b9a02941-b2f3-4ffe-bdca-07a7322887b1\") " pod="default/busybox-fc5497c4f-pj8gg"
	Aug 12 10:41:14 ha-919901 kubelet[1369]: E0812 10:41:14.517575    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 10:41:14 ha-919901 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 10:41:14 ha-919901 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 10:41:14 ha-919901 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 10:41:14 ha-919901 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 10:42:14 ha-919901 kubelet[1369]: E0812 10:42:14.517152    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 10:42:14 ha-919901 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 10:42:14 ha-919901 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 10:42:14 ha-919901 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 10:42:14 ha-919901 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 10:43:14 ha-919901 kubelet[1369]: E0812 10:43:14.515710    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 10:43:14 ha-919901 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 10:43:14 ha-919901 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 10:43:14 ha-919901 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 10:43:14 ha-919901 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-919901 -n ha-919901
helpers_test.go:261: (dbg) Run:  kubectl --context ha-919901 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.99s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (58.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-919901 status -v=7 --alsologtostderr: exit status 3 (3.193554631s)

                                                
                                                
-- stdout --
	ha-919901
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-919901-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-919901-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-919901-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 10:43:53.813159   27052 out.go:291] Setting OutFile to fd 1 ...
	I0812 10:43:53.813396   27052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:43:53.813404   27052 out.go:304] Setting ErrFile to fd 2...
	I0812 10:43:53.813409   27052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:43:53.813600   27052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 10:43:53.813758   27052 out.go:298] Setting JSON to false
	I0812 10:43:53.813777   27052 mustload.go:65] Loading cluster: ha-919901
	I0812 10:43:53.813875   27052 notify.go:220] Checking for updates...
	I0812 10:43:53.814134   27052 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:43:53.814147   27052 status.go:255] checking status of ha-919901 ...
	I0812 10:43:53.814533   27052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:43:53.814589   27052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:43:53.833882   27052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40337
	I0812 10:43:53.834437   27052 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:43:53.835067   27052 main.go:141] libmachine: Using API Version  1
	I0812 10:43:53.835092   27052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:43:53.835471   27052 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:43:53.835659   27052 main.go:141] libmachine: (ha-919901) Calling .GetState
	I0812 10:43:53.837452   27052 status.go:330] ha-919901 host status = "Running" (err=<nil>)
	I0812 10:43:53.837468   27052 host.go:66] Checking if "ha-919901" exists ...
	I0812 10:43:53.837775   27052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:43:53.837821   27052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:43:53.852880   27052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40639
	I0812 10:43:53.853271   27052 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:43:53.853790   27052 main.go:141] libmachine: Using API Version  1
	I0812 10:43:53.853812   27052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:43:53.854154   27052 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:43:53.854334   27052 main.go:141] libmachine: (ha-919901) Calling .GetIP
	I0812 10:43:53.856941   27052 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:43:53.857380   27052 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:43:53.857407   27052 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:43:53.857601   27052 host.go:66] Checking if "ha-919901" exists ...
	I0812 10:43:53.857880   27052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:43:53.857929   27052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:43:53.873634   27052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36429
	I0812 10:43:53.874049   27052 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:43:53.874495   27052 main.go:141] libmachine: Using API Version  1
	I0812 10:43:53.874514   27052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:43:53.874889   27052 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:43:53.875054   27052 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:43:53.875279   27052 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:43:53.875315   27052 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:43:53.878339   27052 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:43:53.878912   27052 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:43:53.878939   27052 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:43:53.879097   27052 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:43:53.879271   27052 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:43:53.879441   27052 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:43:53.879600   27052 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:43:53.964149   27052 ssh_runner.go:195] Run: systemctl --version
	I0812 10:43:53.970168   27052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:43:53.986714   27052 kubeconfig.go:125] found "ha-919901" server: "https://192.168.39.254:8443"
	I0812 10:43:53.986742   27052 api_server.go:166] Checking apiserver status ...
	I0812 10:43:53.986773   27052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:43:54.001091   27052 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup
	W0812 10:43:54.018667   27052 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 10:43:54.018714   27052 ssh_runner.go:195] Run: ls
	I0812 10:43:54.023793   27052 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 10:43:54.027865   27052 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 10:43:54.027890   27052 status.go:422] ha-919901 apiserver status = Running (err=<nil>)
	I0812 10:43:54.027899   27052 status.go:257] ha-919901 status: &{Name:ha-919901 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 10:43:54.027915   27052 status.go:255] checking status of ha-919901-m02 ...
	I0812 10:43:54.028214   27052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:43:54.028249   27052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:43:54.043101   27052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44497
	I0812 10:43:54.043488   27052 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:43:54.043989   27052 main.go:141] libmachine: Using API Version  1
	I0812 10:43:54.044009   27052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:43:54.044333   27052 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:43:54.044566   27052 main.go:141] libmachine: (ha-919901-m02) Calling .GetState
	I0812 10:43:54.046397   27052 status.go:330] ha-919901-m02 host status = "Running" (err=<nil>)
	I0812 10:43:54.046416   27052 host.go:66] Checking if "ha-919901-m02" exists ...
	I0812 10:43:54.046718   27052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:43:54.046751   27052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:43:54.061800   27052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40175
	I0812 10:43:54.062255   27052 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:43:54.062783   27052 main.go:141] libmachine: Using API Version  1
	I0812 10:43:54.062809   27052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:43:54.063103   27052 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:43:54.063305   27052 main.go:141] libmachine: (ha-919901-m02) Calling .GetIP
	I0812 10:43:54.066552   27052 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:43:54.067132   27052 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:43:54.067161   27052 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:43:54.067410   27052 host.go:66] Checking if "ha-919901-m02" exists ...
	I0812 10:43:54.067742   27052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:43:54.067780   27052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:43:54.083294   27052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37349
	I0812 10:43:54.083706   27052 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:43:54.084131   27052 main.go:141] libmachine: Using API Version  1
	I0812 10:43:54.084151   27052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:43:54.084536   27052 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:43:54.084745   27052 main.go:141] libmachine: (ha-919901-m02) Calling .DriverName
	I0812 10:43:54.084952   27052 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:43:54.084972   27052 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHHostname
	I0812 10:43:54.087744   27052 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:43:54.088186   27052 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:43:54.088203   27052 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:43:54.088438   27052 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHPort
	I0812 10:43:54.088632   27052 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:43:54.088796   27052 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHUsername
	I0812 10:43:54.088967   27052 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02/id_rsa Username:docker}
	W0812 10:43:56.609240   27052 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.139:22: connect: no route to host
	W0812 10:43:56.609324   27052 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.139:22: connect: no route to host
	E0812 10:43:56.609338   27052 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.139:22: connect: no route to host
	I0812 10:43:56.609346   27052 status.go:257] ha-919901-m02 status: &{Name:ha-919901-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0812 10:43:56.609367   27052 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.139:22: connect: no route to host
	I0812 10:43:56.609374   27052 status.go:255] checking status of ha-919901-m03 ...
	I0812 10:43:56.609695   27052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:43:56.609735   27052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:43:56.625092   27052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37567
	I0812 10:43:56.625522   27052 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:43:56.626001   27052 main.go:141] libmachine: Using API Version  1
	I0812 10:43:56.626028   27052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:43:56.626293   27052 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:43:56.626466   27052 main.go:141] libmachine: (ha-919901-m03) Calling .GetState
	I0812 10:43:56.628169   27052 status.go:330] ha-919901-m03 host status = "Running" (err=<nil>)
	I0812 10:43:56.628186   27052 host.go:66] Checking if "ha-919901-m03" exists ...
	I0812 10:43:56.628549   27052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:43:56.628619   27052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:43:56.643258   27052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44249
	I0812 10:43:56.643673   27052 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:43:56.644123   27052 main.go:141] libmachine: Using API Version  1
	I0812 10:43:56.644142   27052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:43:56.644479   27052 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:43:56.644677   27052 main.go:141] libmachine: (ha-919901-m03) Calling .GetIP
	I0812 10:43:56.647698   27052 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:43:56.648124   27052 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:43:56.648151   27052 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:43:56.648281   27052 host.go:66] Checking if "ha-919901-m03" exists ...
	I0812 10:43:56.648578   27052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:43:56.648612   27052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:43:56.663813   27052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46383
	I0812 10:43:56.664277   27052 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:43:56.664754   27052 main.go:141] libmachine: Using API Version  1
	I0812 10:43:56.664775   27052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:43:56.665094   27052 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:43:56.665280   27052 main.go:141] libmachine: (ha-919901-m03) Calling .DriverName
	I0812 10:43:56.665498   27052 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:43:56.665518   27052 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHHostname
	I0812 10:43:56.668477   27052 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:43:56.668968   27052 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:43:56.668991   27052 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:43:56.669099   27052 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHPort
	I0812 10:43:56.669276   27052 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:43:56.669417   27052 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHUsername
	I0812 10:43:56.669551   27052 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/id_rsa Username:docker}
	I0812 10:43:56.752028   27052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:43:56.769556   27052 kubeconfig.go:125] found "ha-919901" server: "https://192.168.39.254:8443"
	I0812 10:43:56.769591   27052 api_server.go:166] Checking apiserver status ...
	I0812 10:43:56.769651   27052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:43:56.784722   27052 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup
	W0812 10:43:56.795368   27052 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 10:43:56.795456   27052 ssh_runner.go:195] Run: ls
	I0812 10:43:56.799967   27052 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 10:43:56.804210   27052 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 10:43:56.804232   27052 status.go:422] ha-919901-m03 apiserver status = Running (err=<nil>)
	I0812 10:43:56.804239   27052 status.go:257] ha-919901-m03 status: &{Name:ha-919901-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 10:43:56.804252   27052 status.go:255] checking status of ha-919901-m04 ...
	I0812 10:43:56.804573   27052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:43:56.804611   27052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:43:56.819916   27052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38605
	I0812 10:43:56.820313   27052 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:43:56.820832   27052 main.go:141] libmachine: Using API Version  1
	I0812 10:43:56.820851   27052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:43:56.821155   27052 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:43:56.821362   27052 main.go:141] libmachine: (ha-919901-m04) Calling .GetState
	I0812 10:43:56.823124   27052 status.go:330] ha-919901-m04 host status = "Running" (err=<nil>)
	I0812 10:43:56.823143   27052 host.go:66] Checking if "ha-919901-m04" exists ...
	I0812 10:43:56.823443   27052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:43:56.823486   27052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:43:56.838675   27052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35137
	I0812 10:43:56.839108   27052 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:43:56.839732   27052 main.go:141] libmachine: Using API Version  1
	I0812 10:43:56.839755   27052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:43:56.840083   27052 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:43:56.840285   27052 main.go:141] libmachine: (ha-919901-m04) Calling .GetIP
	I0812 10:43:56.843407   27052 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:43:56.843876   27052 main.go:141] libmachine: (ha-919901-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:f6:73", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:40:36 +0000 UTC Type:0 Mac:52:54:00:4d:f6:73 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:ha-919901-m04 Clientid:01:52:54:00:4d:f6:73}
	I0812 10:43:56.843896   27052 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined IP address 192.168.39.218 and MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:43:56.844141   27052 host.go:66] Checking if "ha-919901-m04" exists ...
	I0812 10:43:56.844445   27052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:43:56.844482   27052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:43:56.860689   27052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44171
	I0812 10:43:56.861124   27052 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:43:56.861678   27052 main.go:141] libmachine: Using API Version  1
	I0812 10:43:56.861700   27052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:43:56.862015   27052 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:43:56.862211   27052 main.go:141] libmachine: (ha-919901-m04) Calling .DriverName
	I0812 10:43:56.862399   27052 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:43:56.862426   27052 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHHostname
	I0812 10:43:56.865416   27052 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:43:56.865876   27052 main.go:141] libmachine: (ha-919901-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:f6:73", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:40:36 +0000 UTC Type:0 Mac:52:54:00:4d:f6:73 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:ha-919901-m04 Clientid:01:52:54:00:4d:f6:73}
	I0812 10:43:56.865900   27052 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined IP address 192.168.39.218 and MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:43:56.866164   27052 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHPort
	I0812 10:43:56.866359   27052 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHKeyPath
	I0812 10:43:56.866565   27052 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHUsername
	I0812 10:43:56.866721   27052 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m04/id_rsa Username:docker}
	I0812 10:43:56.948032   27052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:43:56.962039   27052 status.go:257] ha-919901-m04 status: &{Name:ha-919901-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-919901 status -v=7 --alsologtostderr: exit status 3 (5.187286684s)

-- stdout --
	ha-919901
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-919901-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-919901-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-919901-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0812 10:43:57.958317   27152 out.go:291] Setting OutFile to fd 1 ...
	I0812 10:43:57.958458   27152 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:43:57.958485   27152 out.go:304] Setting ErrFile to fd 2...
	I0812 10:43:57.958494   27152 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:43:57.958755   27152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 10:43:57.958942   27152 out.go:298] Setting JSON to false
	I0812 10:43:57.958971   27152 mustload.go:65] Loading cluster: ha-919901
	I0812 10:43:57.959087   27152 notify.go:220] Checking for updates...
	I0812 10:43:57.959428   27152 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:43:57.959446   27152 status.go:255] checking status of ha-919901 ...
	I0812 10:43:57.959891   27152 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:43:57.959973   27152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:43:57.977872   27152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37951
	I0812 10:43:57.978374   27152 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:43:57.979103   27152 main.go:141] libmachine: Using API Version  1
	I0812 10:43:57.979142   27152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:43:57.979511   27152 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:43:57.979720   27152 main.go:141] libmachine: (ha-919901) Calling .GetState
	I0812 10:43:57.981701   27152 status.go:330] ha-919901 host status = "Running" (err=<nil>)
	I0812 10:43:57.981717   27152 host.go:66] Checking if "ha-919901" exists ...
	I0812 10:43:57.982113   27152 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:43:57.982154   27152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:43:57.996983   27152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39427
	I0812 10:43:57.997430   27152 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:43:57.997947   27152 main.go:141] libmachine: Using API Version  1
	I0812 10:43:57.997970   27152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:43:57.998266   27152 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:43:57.998438   27152 main.go:141] libmachine: (ha-919901) Calling .GetIP
	I0812 10:43:58.001265   27152 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:43:58.001679   27152 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:43:58.001714   27152 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:43:58.001799   27152 host.go:66] Checking if "ha-919901" exists ...
	I0812 10:43:58.002096   27152 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:43:58.002142   27152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:43:58.018310   27152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46447
	I0812 10:43:58.018804   27152 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:43:58.019239   27152 main.go:141] libmachine: Using API Version  1
	I0812 10:43:58.019269   27152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:43:58.019730   27152 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:43:58.019946   27152 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:43:58.020145   27152 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:43:58.020174   27152 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:43:58.023277   27152 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:43:58.023747   27152 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:43:58.023778   27152 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:43:58.023951   27152 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:43:58.024152   27152 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:43:58.024338   27152 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:43:58.024500   27152 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:43:58.108384   27152 ssh_runner.go:195] Run: systemctl --version
	I0812 10:43:58.114758   27152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:43:58.129903   27152 kubeconfig.go:125] found "ha-919901" server: "https://192.168.39.254:8443"
	I0812 10:43:58.129932   27152 api_server.go:166] Checking apiserver status ...
	I0812 10:43:58.129969   27152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:43:58.143546   27152 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup
	W0812 10:43:58.153358   27152 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 10:43:58.153440   27152 ssh_runner.go:195] Run: ls
	I0812 10:43:58.157659   27152 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 10:43:58.162136   27152 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 10:43:58.162158   27152 status.go:422] ha-919901 apiserver status = Running (err=<nil>)
	I0812 10:43:58.162167   27152 status.go:257] ha-919901 status: &{Name:ha-919901 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 10:43:58.162184   27152 status.go:255] checking status of ha-919901-m02 ...
	I0812 10:43:58.162480   27152 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:43:58.162518   27152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:43:58.177441   27152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33043
	I0812 10:43:58.177895   27152 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:43:58.178388   27152 main.go:141] libmachine: Using API Version  1
	I0812 10:43:58.178412   27152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:43:58.178772   27152 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:43:58.179045   27152 main.go:141] libmachine: (ha-919901-m02) Calling .GetState
	I0812 10:43:58.181037   27152 status.go:330] ha-919901-m02 host status = "Running" (err=<nil>)
	I0812 10:43:58.181053   27152 host.go:66] Checking if "ha-919901-m02" exists ...
	I0812 10:43:58.181350   27152 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:43:58.181403   27152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:43:58.196325   27152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41885
	I0812 10:43:58.196748   27152 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:43:58.197292   27152 main.go:141] libmachine: Using API Version  1
	I0812 10:43:58.197333   27152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:43:58.197664   27152 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:43:58.197894   27152 main.go:141] libmachine: (ha-919901-m02) Calling .GetIP
	I0812 10:43:58.201196   27152 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:43:58.201720   27152 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:43:58.201749   27152 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:43:58.201916   27152 host.go:66] Checking if "ha-919901-m02" exists ...
	I0812 10:43:58.202279   27152 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:43:58.202327   27152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:43:58.217463   27152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43535
	I0812 10:43:58.218066   27152 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:43:58.218744   27152 main.go:141] libmachine: Using API Version  1
	I0812 10:43:58.218768   27152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:43:58.219105   27152 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:43:58.219312   27152 main.go:141] libmachine: (ha-919901-m02) Calling .DriverName
	I0812 10:43:58.219521   27152 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:43:58.219543   27152 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHHostname
	I0812 10:43:58.222683   27152 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:43:58.223216   27152 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:43:58.223240   27152 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:43:58.223411   27152 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHPort
	I0812 10:43:58.223609   27152 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:43:58.223787   27152 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHUsername
	I0812 10:43:58.223917   27152 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02/id_rsa Username:docker}
	W0812 10:43:59.677223   27152 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.139:22: connect: no route to host
	I0812 10:43:59.677284   27152 retry.go:31] will retry after 318.127372ms: dial tcp 192.168.39.139:22: connect: no route to host
	W0812 10:44:02.749274   27152 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.139:22: connect: no route to host
	W0812 10:44:02.749369   27152 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.139:22: connect: no route to host
	E0812 10:44:02.749392   27152 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.139:22: connect: no route to host
	I0812 10:44:02.749403   27152 status.go:257] ha-919901-m02 status: &{Name:ha-919901-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0812 10:44:02.749436   27152 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.139:22: connect: no route to host
	I0812 10:44:02.749450   27152 status.go:255] checking status of ha-919901-m03 ...
	I0812 10:44:02.749894   27152 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:02.749941   27152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:02.765343   27152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34737
	I0812 10:44:02.765838   27152 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:02.766312   27152 main.go:141] libmachine: Using API Version  1
	I0812 10:44:02.766333   27152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:02.766630   27152 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:02.766795   27152 main.go:141] libmachine: (ha-919901-m03) Calling .GetState
	I0812 10:44:02.768305   27152 status.go:330] ha-919901-m03 host status = "Running" (err=<nil>)
	I0812 10:44:02.768325   27152 host.go:66] Checking if "ha-919901-m03" exists ...
	I0812 10:44:02.768622   27152 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:02.768663   27152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:02.784679   27152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37573
	I0812 10:44:02.785118   27152 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:02.785579   27152 main.go:141] libmachine: Using API Version  1
	I0812 10:44:02.785593   27152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:02.785934   27152 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:02.786164   27152 main.go:141] libmachine: (ha-919901-m03) Calling .GetIP
	I0812 10:44:02.789247   27152 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:44:02.789653   27152 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:44:02.789678   27152 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:44:02.789823   27152 host.go:66] Checking if "ha-919901-m03" exists ...
	I0812 10:44:02.790186   27152 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:02.790233   27152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:02.804980   27152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41129
	I0812 10:44:02.805423   27152 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:02.805865   27152 main.go:141] libmachine: Using API Version  1
	I0812 10:44:02.805892   27152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:02.806207   27152 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:02.806409   27152 main.go:141] libmachine: (ha-919901-m03) Calling .DriverName
	I0812 10:44:02.806585   27152 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:44:02.806606   27152 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHHostname
	I0812 10:44:02.809407   27152 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:44:02.809887   27152 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:44:02.809918   27152 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:44:02.810046   27152 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHPort
	I0812 10:44:02.810241   27152 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:44:02.810403   27152 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHUsername
	I0812 10:44:02.810592   27152 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/id_rsa Username:docker}
	I0812 10:44:02.897018   27152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:44:02.913128   27152 kubeconfig.go:125] found "ha-919901" server: "https://192.168.39.254:8443"
	I0812 10:44:02.913164   27152 api_server.go:166] Checking apiserver status ...
	I0812 10:44:02.913213   27152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:44:02.927404   27152 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup
	W0812 10:44:02.941209   27152 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 10:44:02.941272   27152 ssh_runner.go:195] Run: ls
	I0812 10:44:02.945797   27152 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 10:44:02.949952   27152 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 10:44:02.949976   27152 status.go:422] ha-919901-m03 apiserver status = Running (err=<nil>)
	I0812 10:44:02.949985   27152 status.go:257] ha-919901-m03 status: &{Name:ha-919901-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 10:44:02.950000   27152 status.go:255] checking status of ha-919901-m04 ...
	I0812 10:44:02.950337   27152 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:02.950377   27152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:02.965597   27152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46837
	I0812 10:44:02.966037   27152 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:02.966505   27152 main.go:141] libmachine: Using API Version  1
	I0812 10:44:02.966527   27152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:02.966839   27152 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:02.967062   27152 main.go:141] libmachine: (ha-919901-m04) Calling .GetState
	I0812 10:44:02.968502   27152 status.go:330] ha-919901-m04 host status = "Running" (err=<nil>)
	I0812 10:44:02.968516   27152 host.go:66] Checking if "ha-919901-m04" exists ...
	I0812 10:44:02.968873   27152 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:02.968926   27152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:02.984500   27152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41355
	I0812 10:44:02.984934   27152 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:02.985393   27152 main.go:141] libmachine: Using API Version  1
	I0812 10:44:02.985415   27152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:02.985708   27152 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:02.985933   27152 main.go:141] libmachine: (ha-919901-m04) Calling .GetIP
	I0812 10:44:02.989086   27152 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:44:02.989535   27152 main.go:141] libmachine: (ha-919901-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:f6:73", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:40:36 +0000 UTC Type:0 Mac:52:54:00:4d:f6:73 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:ha-919901-m04 Clientid:01:52:54:00:4d:f6:73}
	I0812 10:44:02.989733   27152 host.go:66] Checking if "ha-919901-m04" exists ...
	I0812 10:44:02.989733   27152 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined IP address 192.168.39.218 and MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:44:02.990124   27152 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:02.990187   27152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:03.005672   27152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40849
	I0812 10:44:03.006127   27152 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:03.006577   27152 main.go:141] libmachine: Using API Version  1
	I0812 10:44:03.006596   27152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:03.006893   27152 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:03.007058   27152 main.go:141] libmachine: (ha-919901-m04) Calling .DriverName
	I0812 10:44:03.007230   27152 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:44:03.007251   27152 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHHostname
	I0812 10:44:03.010357   27152 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:44:03.010858   27152 main.go:141] libmachine: (ha-919901-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:f6:73", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:40:36 +0000 UTC Type:0 Mac:52:54:00:4d:f6:73 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:ha-919901-m04 Clientid:01:52:54:00:4d:f6:73}
	I0812 10:44:03.010885   27152 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined IP address 192.168.39.218 and MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:44:03.011027   27152 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHPort
	I0812 10:44:03.011217   27152 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHKeyPath
	I0812 10:44:03.011425   27152 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHUsername
	I0812 10:44:03.011629   27152 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m04/id_rsa Username:docker}
	I0812 10:44:03.087797   27152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:44:03.102734   27152 status.go:257] ha-919901-m04 status: &{Name:ha-919901-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-919901 status -v=7 --alsologtostderr: exit status 3 (4.818068341s)

-- stdout --
	ha-919901
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-919901-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-919901-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-919901-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0812 10:44:04.819053   27252 out.go:291] Setting OutFile to fd 1 ...
	I0812 10:44:04.819173   27252 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:44:04.819184   27252 out.go:304] Setting ErrFile to fd 2...
	I0812 10:44:04.819190   27252 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:44:04.819413   27252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 10:44:04.819576   27252 out.go:298] Setting JSON to false
	I0812 10:44:04.819598   27252 mustload.go:65] Loading cluster: ha-919901
	I0812 10:44:04.819644   27252 notify.go:220] Checking for updates...
	I0812 10:44:04.819996   27252 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:44:04.820018   27252 status.go:255] checking status of ha-919901 ...
	I0812 10:44:04.820468   27252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:04.820535   27252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:04.838852   27252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34637
	I0812 10:44:04.839373   27252 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:04.840013   27252 main.go:141] libmachine: Using API Version  1
	I0812 10:44:04.840040   27252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:04.840375   27252 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:04.840601   27252 main.go:141] libmachine: (ha-919901) Calling .GetState
	I0812 10:44:04.842307   27252 status.go:330] ha-919901 host status = "Running" (err=<nil>)
	I0812 10:44:04.842321   27252 host.go:66] Checking if "ha-919901" exists ...
	I0812 10:44:04.842607   27252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:04.842638   27252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:04.858430   27252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42253
	I0812 10:44:04.858888   27252 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:04.859345   27252 main.go:141] libmachine: Using API Version  1
	I0812 10:44:04.859365   27252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:04.859700   27252 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:04.859902   27252 main.go:141] libmachine: (ha-919901) Calling .GetIP
	I0812 10:44:04.863031   27252 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:44:04.863432   27252 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:44:04.863454   27252 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:44:04.863619   27252 host.go:66] Checking if "ha-919901" exists ...
	I0812 10:44:04.864047   27252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:04.864089   27252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:04.879233   27252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43039
	I0812 10:44:04.879713   27252 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:04.880253   27252 main.go:141] libmachine: Using API Version  1
	I0812 10:44:04.880274   27252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:04.880615   27252 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:04.880821   27252 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:44:04.881019   27252 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:44:04.881052   27252 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:44:04.884064   27252 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:44:04.884574   27252 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:44:04.884600   27252 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:44:04.884781   27252 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:44:04.884991   27252 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:44:04.885194   27252 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:44:04.885336   27252 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:44:04.969215   27252 ssh_runner.go:195] Run: systemctl --version
	I0812 10:44:04.975309   27252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:44:04.991802   27252 kubeconfig.go:125] found "ha-919901" server: "https://192.168.39.254:8443"
	I0812 10:44:04.991828   27252 api_server.go:166] Checking apiserver status ...
	I0812 10:44:04.991859   27252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:44:05.008680   27252 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup
	W0812 10:44:05.022624   27252 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 10:44:05.022676   27252 ssh_runner.go:195] Run: ls
	I0812 10:44:05.027838   27252 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 10:44:05.032393   27252 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 10:44:05.032422   27252 status.go:422] ha-919901 apiserver status = Running (err=<nil>)
	I0812 10:44:05.032436   27252 status.go:257] ha-919901 status: &{Name:ha-919901 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 10:44:05.032459   27252 status.go:255] checking status of ha-919901-m02 ...
	I0812 10:44:05.032963   27252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:05.033007   27252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:05.048188   27252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43185
	I0812 10:44:05.048631   27252 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:05.049202   27252 main.go:141] libmachine: Using API Version  1
	I0812 10:44:05.049223   27252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:05.049579   27252 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:05.049784   27252 main.go:141] libmachine: (ha-919901-m02) Calling .GetState
	I0812 10:44:05.051778   27252 status.go:330] ha-919901-m02 host status = "Running" (err=<nil>)
	I0812 10:44:05.051798   27252 host.go:66] Checking if "ha-919901-m02" exists ...
	I0812 10:44:05.052192   27252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:05.052263   27252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:05.067799   27252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41645
	I0812 10:44:05.068219   27252 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:05.068816   27252 main.go:141] libmachine: Using API Version  1
	I0812 10:44:05.068845   27252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:05.069195   27252 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:05.069403   27252 main.go:141] libmachine: (ha-919901-m02) Calling .GetIP
	I0812 10:44:05.072315   27252 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:44:05.072908   27252 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:44:05.072939   27252 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:44:05.073149   27252 host.go:66] Checking if "ha-919901-m02" exists ...
	I0812 10:44:05.073546   27252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:05.073589   27252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:05.089234   27252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46455
	I0812 10:44:05.089664   27252 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:05.090218   27252 main.go:141] libmachine: Using API Version  1
	I0812 10:44:05.090243   27252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:05.090625   27252 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:05.090838   27252 main.go:141] libmachine: (ha-919901-m02) Calling .DriverName
	I0812 10:44:05.091038   27252 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:44:05.091071   27252 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHHostname
	I0812 10:44:05.094025   27252 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:44:05.094434   27252 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:44:05.094460   27252 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:44:05.094643   27252 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHPort
	I0812 10:44:05.094800   27252 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:44:05.094942   27252 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHUsername
	I0812 10:44:05.095057   27252 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02/id_rsa Username:docker}
	W0812 10:44:05.821160   27252 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.139:22: connect: no route to host
	I0812 10:44:05.821204   27252 retry.go:31] will retry after 346.962251ms: dial tcp 192.168.39.139:22: connect: no route to host
	W0812 10:44:09.245158   27252 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.139:22: connect: no route to host
	W0812 10:44:09.245257   27252 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.139:22: connect: no route to host
	E0812 10:44:09.245277   27252 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.139:22: connect: no route to host
	I0812 10:44:09.245326   27252 status.go:257] ha-919901-m02 status: &{Name:ha-919901-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0812 10:44:09.245363   27252 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.139:22: connect: no route to host
	I0812 10:44:09.245373   27252 status.go:255] checking status of ha-919901-m03 ...
	I0812 10:44:09.245797   27252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:09.245851   27252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:09.261177   27252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43479
	I0812 10:44:09.261651   27252 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:09.262145   27252 main.go:141] libmachine: Using API Version  1
	I0812 10:44:09.262164   27252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:09.262473   27252 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:09.262676   27252 main.go:141] libmachine: (ha-919901-m03) Calling .GetState
	I0812 10:44:09.264173   27252 status.go:330] ha-919901-m03 host status = "Running" (err=<nil>)
	I0812 10:44:09.264188   27252 host.go:66] Checking if "ha-919901-m03" exists ...
	I0812 10:44:09.264514   27252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:09.264585   27252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:09.280095   27252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43727
	I0812 10:44:09.280550   27252 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:09.281042   27252 main.go:141] libmachine: Using API Version  1
	I0812 10:44:09.281060   27252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:09.281408   27252 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:09.281587   27252 main.go:141] libmachine: (ha-919901-m03) Calling .GetIP
	I0812 10:44:09.284824   27252 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:44:09.285335   27252 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:44:09.285357   27252 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:44:09.285540   27252 host.go:66] Checking if "ha-919901-m03" exists ...
	I0812 10:44:09.285830   27252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:09.285862   27252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:09.301356   27252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42015
	I0812 10:44:09.301764   27252 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:09.302239   27252 main.go:141] libmachine: Using API Version  1
	I0812 10:44:09.302260   27252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:09.302617   27252 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:09.302792   27252 main.go:141] libmachine: (ha-919901-m03) Calling .DriverName
	I0812 10:44:09.302979   27252 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:44:09.302997   27252 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHHostname
	I0812 10:44:09.305928   27252 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:44:09.306346   27252 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:44:09.306391   27252 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:44:09.306507   27252 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHPort
	I0812 10:44:09.306671   27252 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:44:09.306838   27252 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHUsername
	I0812 10:44:09.306967   27252 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/id_rsa Username:docker}
	I0812 10:44:09.392508   27252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:44:09.409511   27252 kubeconfig.go:125] found "ha-919901" server: "https://192.168.39.254:8443"
	I0812 10:44:09.409540   27252 api_server.go:166] Checking apiserver status ...
	I0812 10:44:09.409575   27252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:44:09.423751   27252 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup
	W0812 10:44:09.433399   27252 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 10:44:09.433469   27252 ssh_runner.go:195] Run: ls
	I0812 10:44:09.438668   27252 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 10:44:09.443278   27252 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 10:44:09.443309   27252 status.go:422] ha-919901-m03 apiserver status = Running (err=<nil>)
	I0812 10:44:09.443321   27252 status.go:257] ha-919901-m03 status: &{Name:ha-919901-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 10:44:09.443338   27252 status.go:255] checking status of ha-919901-m04 ...
	I0812 10:44:09.443746   27252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:09.443788   27252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:09.459600   27252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37021
	I0812 10:44:09.460016   27252 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:09.460475   27252 main.go:141] libmachine: Using API Version  1
	I0812 10:44:09.460490   27252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:09.460791   27252 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:09.461081   27252 main.go:141] libmachine: (ha-919901-m04) Calling .GetState
	I0812 10:44:09.462849   27252 status.go:330] ha-919901-m04 host status = "Running" (err=<nil>)
	I0812 10:44:09.462866   27252 host.go:66] Checking if "ha-919901-m04" exists ...
	I0812 10:44:09.463272   27252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:09.463318   27252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:09.479105   27252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33379
	I0812 10:44:09.479579   27252 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:09.480006   27252 main.go:141] libmachine: Using API Version  1
	I0812 10:44:09.480029   27252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:09.480384   27252 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:09.480577   27252 main.go:141] libmachine: (ha-919901-m04) Calling .GetIP
	I0812 10:44:09.483133   27252 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:44:09.483599   27252 main.go:141] libmachine: (ha-919901-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:f6:73", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:40:36 +0000 UTC Type:0 Mac:52:54:00:4d:f6:73 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:ha-919901-m04 Clientid:01:52:54:00:4d:f6:73}
	I0812 10:44:09.483642   27252 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined IP address 192.168.39.218 and MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:44:09.483771   27252 host.go:66] Checking if "ha-919901-m04" exists ...
	I0812 10:44:09.484092   27252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:09.484131   27252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:09.499150   27252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39057
	I0812 10:44:09.499626   27252 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:09.500110   27252 main.go:141] libmachine: Using API Version  1
	I0812 10:44:09.500131   27252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:09.500568   27252 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:09.500793   27252 main.go:141] libmachine: (ha-919901-m04) Calling .DriverName
	I0812 10:44:09.501047   27252 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:44:09.501076   27252 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHHostname
	I0812 10:44:09.503765   27252 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:44:09.504166   27252 main.go:141] libmachine: (ha-919901-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:f6:73", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:40:36 +0000 UTC Type:0 Mac:52:54:00:4d:f6:73 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:ha-919901-m04 Clientid:01:52:54:00:4d:f6:73}
	I0812 10:44:09.504187   27252 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined IP address 192.168.39.218 and MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:44:09.504366   27252 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHPort
	I0812 10:44:09.504535   27252 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHKeyPath
	I0812 10:44:09.504672   27252 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHUsername
	I0812 10:44:09.504793   27252 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m04/id_rsa Username:docker}
	I0812 10:44:09.580993   27252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:44:09.594635   27252 status.go:257] ha-919901-m04 status: &{Name:ha-919901-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-919901 status -v=7 --alsologtostderr: exit status 3 (4.771489439s)

-- stdout --
	ha-919901
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-919901-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-919901-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-919901-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0812 10:44:11.011962   27352 out.go:291] Setting OutFile to fd 1 ...
	I0812 10:44:11.012208   27352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:44:11.012220   27352 out.go:304] Setting ErrFile to fd 2...
	I0812 10:44:11.012224   27352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:44:11.012462   27352 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 10:44:11.012667   27352 out.go:298] Setting JSON to false
	I0812 10:44:11.012690   27352 mustload.go:65] Loading cluster: ha-919901
	I0812 10:44:11.012750   27352 notify.go:220] Checking for updates...
	I0812 10:44:11.013150   27352 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:44:11.013167   27352 status.go:255] checking status of ha-919901 ...
	I0812 10:44:11.013552   27352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:11.013630   27352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:11.032522   27352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41857
	I0812 10:44:11.033130   27352 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:11.033745   27352 main.go:141] libmachine: Using API Version  1
	I0812 10:44:11.033771   27352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:11.034182   27352 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:11.034409   27352 main.go:141] libmachine: (ha-919901) Calling .GetState
	I0812 10:44:11.036152   27352 status.go:330] ha-919901 host status = "Running" (err=<nil>)
	I0812 10:44:11.036166   27352 host.go:66] Checking if "ha-919901" exists ...
	I0812 10:44:11.036446   27352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:11.036483   27352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:11.052719   27352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46501
	I0812 10:44:11.053120   27352 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:11.053598   27352 main.go:141] libmachine: Using API Version  1
	I0812 10:44:11.053620   27352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:11.053926   27352 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:11.054142   27352 main.go:141] libmachine: (ha-919901) Calling .GetIP
	I0812 10:44:11.057365   27352 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:44:11.057828   27352 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:44:11.057863   27352 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:44:11.058045   27352 host.go:66] Checking if "ha-919901" exists ...
	I0812 10:44:11.058466   27352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:11.058552   27352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:11.073516   27352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35629
	I0812 10:44:11.073983   27352 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:11.074547   27352 main.go:141] libmachine: Using API Version  1
	I0812 10:44:11.074564   27352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:11.074884   27352 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:11.075103   27352 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:44:11.075337   27352 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:44:11.075384   27352 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:44:11.078080   27352 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:44:11.078532   27352 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:44:11.078563   27352 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:44:11.078755   27352 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:44:11.079055   27352 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:44:11.079239   27352 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:44:11.079391   27352 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:44:11.161435   27352 ssh_runner.go:195] Run: systemctl --version
	I0812 10:44:11.167962   27352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:44:11.184083   27352 kubeconfig.go:125] found "ha-919901" server: "https://192.168.39.254:8443"
	I0812 10:44:11.184112   27352 api_server.go:166] Checking apiserver status ...
	I0812 10:44:11.184159   27352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:44:11.200671   27352 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup
	W0812 10:44:11.210242   27352 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 10:44:11.210296   27352 ssh_runner.go:195] Run: ls
	I0812 10:44:11.216014   27352 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 10:44:11.222003   27352 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 10:44:11.222029   27352 status.go:422] ha-919901 apiserver status = Running (err=<nil>)
	I0812 10:44:11.222040   27352 status.go:257] ha-919901 status: &{Name:ha-919901 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 10:44:11.222069   27352 status.go:255] checking status of ha-919901-m02 ...
	I0812 10:44:11.222413   27352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:11.222449   27352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:11.238197   27352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34307
	I0812 10:44:11.238600   27352 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:11.239064   27352 main.go:141] libmachine: Using API Version  1
	I0812 10:44:11.239086   27352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:11.239460   27352 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:11.239816   27352 main.go:141] libmachine: (ha-919901-m02) Calling .GetState
	I0812 10:44:11.241299   27352 status.go:330] ha-919901-m02 host status = "Running" (err=<nil>)
	I0812 10:44:11.241319   27352 host.go:66] Checking if "ha-919901-m02" exists ...
	I0812 10:44:11.241590   27352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:11.241621   27352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:11.256720   27352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35095
	I0812 10:44:11.257219   27352 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:11.257766   27352 main.go:141] libmachine: Using API Version  1
	I0812 10:44:11.257810   27352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:11.258183   27352 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:11.258402   27352 main.go:141] libmachine: (ha-919901-m02) Calling .GetIP
	I0812 10:44:11.261673   27352 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:44:11.262053   27352 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:44:11.262083   27352 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:44:11.262276   27352 host.go:66] Checking if "ha-919901-m02" exists ...
	I0812 10:44:11.262619   27352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:11.262659   27352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:11.277841   27352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46499
	I0812 10:44:11.278284   27352 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:11.278771   27352 main.go:141] libmachine: Using API Version  1
	I0812 10:44:11.278797   27352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:11.279124   27352 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:11.279300   27352 main.go:141] libmachine: (ha-919901-m02) Calling .DriverName
	I0812 10:44:11.279494   27352 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:44:11.279515   27352 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHHostname
	I0812 10:44:11.282569   27352 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:44:11.283050   27352 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:44:11.283075   27352 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:44:11.283225   27352 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHPort
	I0812 10:44:11.283393   27352 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:44:11.283572   27352 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHUsername
	I0812 10:44:11.283731   27352 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02/id_rsa Username:docker}
	W0812 10:44:12.317163   27352 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.139:22: connect: no route to host
	I0812 10:44:12.317221   27352 retry.go:31] will retry after 162.845619ms: dial tcp 192.168.39.139:22: connect: no route to host
	W0812 10:44:15.393163   27352 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.139:22: connect: no route to host
	W0812 10:44:15.393240   27352 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.139:22: connect: no route to host
	E0812 10:44:15.393258   27352 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.139:22: connect: no route to host
	I0812 10:44:15.393269   27352 status.go:257] ha-919901-m02 status: &{Name:ha-919901-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0812 10:44:15.393312   27352 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.139:22: connect: no route to host
	I0812 10:44:15.393323   27352 status.go:255] checking status of ha-919901-m03 ...
	I0812 10:44:15.393634   27352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:15.393684   27352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:15.408694   27352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38397
	I0812 10:44:15.409156   27352 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:15.409636   27352 main.go:141] libmachine: Using API Version  1
	I0812 10:44:15.409653   27352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:15.409956   27352 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:15.410141   27352 main.go:141] libmachine: (ha-919901-m03) Calling .GetState
	I0812 10:44:15.411735   27352 status.go:330] ha-919901-m03 host status = "Running" (err=<nil>)
	I0812 10:44:15.411752   27352 host.go:66] Checking if "ha-919901-m03" exists ...
	I0812 10:44:15.412109   27352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:15.412148   27352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:15.428633   27352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39791
	I0812 10:44:15.429111   27352 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:15.429632   27352 main.go:141] libmachine: Using API Version  1
	I0812 10:44:15.429657   27352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:15.430054   27352 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:15.430280   27352 main.go:141] libmachine: (ha-919901-m03) Calling .GetIP
	I0812 10:44:15.433650   27352 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:44:15.434157   27352 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:44:15.434193   27352 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:44:15.434320   27352 host.go:66] Checking if "ha-919901-m03" exists ...
	I0812 10:44:15.434651   27352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:15.434698   27352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:15.449812   27352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35071
	I0812 10:44:15.450264   27352 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:15.450749   27352 main.go:141] libmachine: Using API Version  1
	I0812 10:44:15.450771   27352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:15.451054   27352 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:15.451224   27352 main.go:141] libmachine: (ha-919901-m03) Calling .DriverName
	I0812 10:44:15.451435   27352 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:44:15.451454   27352 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHHostname
	I0812 10:44:15.454529   27352 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:44:15.455029   27352 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:44:15.455068   27352 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:44:15.455177   27352 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHPort
	I0812 10:44:15.455351   27352 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:44:15.455519   27352 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHUsername
	I0812 10:44:15.455691   27352 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/id_rsa Username:docker}
	I0812 10:44:15.541561   27352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:44:15.559427   27352 kubeconfig.go:125] found "ha-919901" server: "https://192.168.39.254:8443"
	I0812 10:44:15.559458   27352 api_server.go:166] Checking apiserver status ...
	I0812 10:44:15.559498   27352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:44:15.573719   27352 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup
	W0812 10:44:15.583427   27352 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 10:44:15.583486   27352 ssh_runner.go:195] Run: ls
	I0812 10:44:15.587513   27352 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 10:44:15.591866   27352 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 10:44:15.591891   27352 status.go:422] ha-919901-m03 apiserver status = Running (err=<nil>)
	I0812 10:44:15.591905   27352 status.go:257] ha-919901-m03 status: &{Name:ha-919901-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 10:44:15.591919   27352 status.go:255] checking status of ha-919901-m04 ...
	I0812 10:44:15.592204   27352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:15.592235   27352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:15.607211   27352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33545
	I0812 10:44:15.607712   27352 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:15.608194   27352 main.go:141] libmachine: Using API Version  1
	I0812 10:44:15.608215   27352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:15.608584   27352 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:15.608811   27352 main.go:141] libmachine: (ha-919901-m04) Calling .GetState
	I0812 10:44:15.610648   27352 status.go:330] ha-919901-m04 host status = "Running" (err=<nil>)
	I0812 10:44:15.610665   27352 host.go:66] Checking if "ha-919901-m04" exists ...
	I0812 10:44:15.610965   27352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:15.611010   27352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:15.626565   27352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34327
	I0812 10:44:15.627026   27352 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:15.627496   27352 main.go:141] libmachine: Using API Version  1
	I0812 10:44:15.627526   27352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:15.627846   27352 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:15.628101   27352 main.go:141] libmachine: (ha-919901-m04) Calling .GetIP
	I0812 10:44:15.630937   27352 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:44:15.631373   27352 main.go:141] libmachine: (ha-919901-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:f6:73", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:40:36 +0000 UTC Type:0 Mac:52:54:00:4d:f6:73 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:ha-919901-m04 Clientid:01:52:54:00:4d:f6:73}
	I0812 10:44:15.631398   27352 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined IP address 192.168.39.218 and MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:44:15.631452   27352 host.go:66] Checking if "ha-919901-m04" exists ...
	I0812 10:44:15.631766   27352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:15.631815   27352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:15.646800   27352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37071
	I0812 10:44:15.647255   27352 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:15.647789   27352 main.go:141] libmachine: Using API Version  1
	I0812 10:44:15.647813   27352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:15.648176   27352 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:15.648393   27352 main.go:141] libmachine: (ha-919901-m04) Calling .DriverName
	I0812 10:44:15.648623   27352 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:44:15.648645   27352 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHHostname
	I0812 10:44:15.651348   27352 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:44:15.651774   27352 main.go:141] libmachine: (ha-919901-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:f6:73", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:40:36 +0000 UTC Type:0 Mac:52:54:00:4d:f6:73 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:ha-919901-m04 Clientid:01:52:54:00:4d:f6:73}
	I0812 10:44:15.651800   27352 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined IP address 192.168.39.218 and MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:44:15.652000   27352 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHPort
	I0812 10:44:15.652204   27352 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHKeyPath
	I0812 10:44:15.652339   27352 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHUsername
	I0812 10:44:15.652471   27352 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m04/id_rsa Username:docker}
	I0812 10:44:15.728035   27352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:44:15.742101   27352 status.go:257] ha-919901-m04 status: &{Name:ha-919901-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-919901 status -v=7 --alsologtostderr: exit status 3 (4.609736538s)

                                                
                                                
-- stdout --
	ha-919901
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-919901-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-919901-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-919901-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 10:44:17.598397   27452 out.go:291] Setting OutFile to fd 1 ...
	I0812 10:44:17.598666   27452 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:44:17.598678   27452 out.go:304] Setting ErrFile to fd 2...
	I0812 10:44:17.598682   27452 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:44:17.598850   27452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 10:44:17.599037   27452 out.go:298] Setting JSON to false
	I0812 10:44:17.599061   27452 mustload.go:65] Loading cluster: ha-919901
	I0812 10:44:17.599185   27452 notify.go:220] Checking for updates...
	I0812 10:44:17.599576   27452 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:44:17.599598   27452 status.go:255] checking status of ha-919901 ...
	I0812 10:44:17.600088   27452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:17.600145   27452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:17.619493   27452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38621
	I0812 10:44:17.619948   27452 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:17.620536   27452 main.go:141] libmachine: Using API Version  1
	I0812 10:44:17.620562   27452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:17.620948   27452 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:17.621157   27452 main.go:141] libmachine: (ha-919901) Calling .GetState
	I0812 10:44:17.622857   27452 status.go:330] ha-919901 host status = "Running" (err=<nil>)
	I0812 10:44:17.622873   27452 host.go:66] Checking if "ha-919901" exists ...
	I0812 10:44:17.623152   27452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:17.623184   27452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:17.638966   27452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45629
	I0812 10:44:17.639380   27452 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:17.639855   27452 main.go:141] libmachine: Using API Version  1
	I0812 10:44:17.639877   27452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:17.640155   27452 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:17.640302   27452 main.go:141] libmachine: (ha-919901) Calling .GetIP
	I0812 10:44:17.643039   27452 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:44:17.643585   27452 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:44:17.643611   27452 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:44:17.643793   27452 host.go:66] Checking if "ha-919901" exists ...
	I0812 10:44:17.644103   27452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:17.644140   27452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:17.659633   27452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33455
	I0812 10:44:17.660086   27452 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:17.660665   27452 main.go:141] libmachine: Using API Version  1
	I0812 10:44:17.660688   27452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:17.661218   27452 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:17.661475   27452 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:44:17.661695   27452 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:44:17.661721   27452 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:44:17.664741   27452 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:44:17.665170   27452 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:44:17.665198   27452 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:44:17.665382   27452 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:44:17.665610   27452 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:44:17.665841   27452 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:44:17.666040   27452 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:44:17.748936   27452 ssh_runner.go:195] Run: systemctl --version
	I0812 10:44:17.754938   27452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:44:17.769628   27452 kubeconfig.go:125] found "ha-919901" server: "https://192.168.39.254:8443"
	I0812 10:44:17.769663   27452 api_server.go:166] Checking apiserver status ...
	I0812 10:44:17.769710   27452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:44:17.785687   27452 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup
	W0812 10:44:17.795797   27452 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 10:44:17.795852   27452 ssh_runner.go:195] Run: ls
	I0812 10:44:17.800724   27452 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 10:44:17.807438   27452 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 10:44:17.807475   27452 status.go:422] ha-919901 apiserver status = Running (err=<nil>)
	I0812 10:44:17.807489   27452 status.go:257] ha-919901 status: &{Name:ha-919901 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 10:44:17.807509   27452 status.go:255] checking status of ha-919901-m02 ...
	I0812 10:44:17.807822   27452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:17.807857   27452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:17.823264   27452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41169
	I0812 10:44:17.823713   27452 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:17.824226   27452 main.go:141] libmachine: Using API Version  1
	I0812 10:44:17.824252   27452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:17.824620   27452 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:17.824842   27452 main.go:141] libmachine: (ha-919901-m02) Calling .GetState
	I0812 10:44:17.826606   27452 status.go:330] ha-919901-m02 host status = "Running" (err=<nil>)
	I0812 10:44:17.826622   27452 host.go:66] Checking if "ha-919901-m02" exists ...
	I0812 10:44:17.826939   27452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:17.826981   27452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:17.842638   27452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41785
	I0812 10:44:17.843059   27452 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:17.843830   27452 main.go:141] libmachine: Using API Version  1
	I0812 10:44:17.843876   27452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:17.844370   27452 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:17.844744   27452 main.go:141] libmachine: (ha-919901-m02) Calling .GetIP
	I0812 10:44:17.848366   27452 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:44:17.848980   27452 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:44:17.849026   27452 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:44:17.849145   27452 host.go:66] Checking if "ha-919901-m02" exists ...
	I0812 10:44:17.849492   27452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:17.849530   27452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:17.864543   27452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39397
	I0812 10:44:17.865099   27452 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:17.865618   27452 main.go:141] libmachine: Using API Version  1
	I0812 10:44:17.865656   27452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:17.865962   27452 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:17.866122   27452 main.go:141] libmachine: (ha-919901-m02) Calling .DriverName
	I0812 10:44:17.866342   27452 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:44:17.866369   27452 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHHostname
	I0812 10:44:17.869243   27452 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:44:17.869702   27452 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:44:17.869725   27452 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:44:17.869873   27452 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHPort
	I0812 10:44:17.870078   27452 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:44:17.870217   27452 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHUsername
	I0812 10:44:17.870345   27452 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02/id_rsa Username:docker}
	W0812 10:44:18.461077   27452 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.139:22: connect: no route to host
	I0812 10:44:18.461128   27452 retry.go:31] will retry after 284.28612ms: dial tcp 192.168.39.139:22: connect: no route to host
	W0812 10:44:21.821104   27452 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.139:22: connect: no route to host
	W0812 10:44:21.821200   27452 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.139:22: connect: no route to host
	E0812 10:44:21.821243   27452 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.139:22: connect: no route to host
	I0812 10:44:21.821257   27452 status.go:257] ha-919901-m02 status: &{Name:ha-919901-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0812 10:44:21.821275   27452 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.139:22: connect: no route to host
	I0812 10:44:21.821283   27452 status.go:255] checking status of ha-919901-m03 ...
	I0812 10:44:21.821599   27452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:21.821640   27452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:21.836561   27452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33699
	I0812 10:44:21.837002   27452 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:21.837509   27452 main.go:141] libmachine: Using API Version  1
	I0812 10:44:21.837537   27452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:21.837848   27452 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:21.838058   27452 main.go:141] libmachine: (ha-919901-m03) Calling .GetState
	I0812 10:44:21.839645   27452 status.go:330] ha-919901-m03 host status = "Running" (err=<nil>)
	I0812 10:44:21.839661   27452 host.go:66] Checking if "ha-919901-m03" exists ...
	I0812 10:44:21.839943   27452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:21.839975   27452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:21.855233   27452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39875
	I0812 10:44:21.855714   27452 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:21.856222   27452 main.go:141] libmachine: Using API Version  1
	I0812 10:44:21.856244   27452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:21.856558   27452 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:21.856766   27452 main.go:141] libmachine: (ha-919901-m03) Calling .GetIP
	I0812 10:44:21.859909   27452 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:44:21.860434   27452 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:44:21.860462   27452 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:44:21.860660   27452 host.go:66] Checking if "ha-919901-m03" exists ...
	I0812 10:44:21.861060   27452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:21.861099   27452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:21.877136   27452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44013
	I0812 10:44:21.877573   27452 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:21.878002   27452 main.go:141] libmachine: Using API Version  1
	I0812 10:44:21.878023   27452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:21.878358   27452 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:21.878625   27452 main.go:141] libmachine: (ha-919901-m03) Calling .DriverName
	I0812 10:44:21.878883   27452 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:44:21.878904   27452 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHHostname
	I0812 10:44:21.881767   27452 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:44:21.882139   27452 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:44:21.882156   27452 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:44:21.882341   27452 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHPort
	I0812 10:44:21.882504   27452 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:44:21.882700   27452 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHUsername
	I0812 10:44:21.882820   27452 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/id_rsa Username:docker}
	I0812 10:44:21.964921   27452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:44:21.980124   27452 kubeconfig.go:125] found "ha-919901" server: "https://192.168.39.254:8443"
	I0812 10:44:21.980171   27452 api_server.go:166] Checking apiserver status ...
	I0812 10:44:21.980216   27452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:44:21.993688   27452 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup
	W0812 10:44:22.003311   27452 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 10:44:22.003362   27452 ssh_runner.go:195] Run: ls
	I0812 10:44:22.008572   27452 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 10:44:22.014021   27452 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 10:44:22.014051   27452 status.go:422] ha-919901-m03 apiserver status = Running (err=<nil>)
	I0812 10:44:22.014061   27452 status.go:257] ha-919901-m03 status: &{Name:ha-919901-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 10:44:22.014079   27452 status.go:255] checking status of ha-919901-m04 ...
	I0812 10:44:22.014485   27452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:22.014536   27452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:22.030074   27452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42995
	I0812 10:44:22.030625   27452 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:22.031164   27452 main.go:141] libmachine: Using API Version  1
	I0812 10:44:22.031191   27452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:22.031544   27452 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:22.031794   27452 main.go:141] libmachine: (ha-919901-m04) Calling .GetState
	I0812 10:44:22.033501   27452 status.go:330] ha-919901-m04 host status = "Running" (err=<nil>)
	I0812 10:44:22.033519   27452 host.go:66] Checking if "ha-919901-m04" exists ...
	I0812 10:44:22.033918   27452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:22.033968   27452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:22.049928   27452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43145
	I0812 10:44:22.050468   27452 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:22.051012   27452 main.go:141] libmachine: Using API Version  1
	I0812 10:44:22.051051   27452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:22.051374   27452 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:22.051570   27452 main.go:141] libmachine: (ha-919901-m04) Calling .GetIP
	I0812 10:44:22.054250   27452 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:44:22.054685   27452 main.go:141] libmachine: (ha-919901-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:f6:73", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:40:36 +0000 UTC Type:0 Mac:52:54:00:4d:f6:73 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:ha-919901-m04 Clientid:01:52:54:00:4d:f6:73}
	I0812 10:44:22.054725   27452 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined IP address 192.168.39.218 and MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:44:22.054856   27452 host.go:66] Checking if "ha-919901-m04" exists ...
	I0812 10:44:22.055151   27452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:22.055205   27452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:22.070243   27452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37083
	I0812 10:44:22.070666   27452 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:22.071172   27452 main.go:141] libmachine: Using API Version  1
	I0812 10:44:22.071200   27452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:22.071521   27452 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:22.071720   27452 main.go:141] libmachine: (ha-919901-m04) Calling .DriverName
	I0812 10:44:22.071899   27452 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:44:22.071930   27452 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHHostname
	I0812 10:44:22.074881   27452 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:44:22.075474   27452 main.go:141] libmachine: (ha-919901-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:f6:73", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:40:36 +0000 UTC Type:0 Mac:52:54:00:4d:f6:73 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:ha-919901-m04 Clientid:01:52:54:00:4d:f6:73}
	I0812 10:44:22.075511   27452 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined IP address 192.168.39.218 and MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:44:22.075679   27452 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHPort
	I0812 10:44:22.075848   27452 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHKeyPath
	I0812 10:44:22.075989   27452 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHUsername
	I0812 10:44:22.076088   27452 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m04/id_rsa Username:docker}
	I0812 10:44:22.152380   27452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:44:22.166318   27452 status.go:257] ha-919901-m04 status: &{Name:ha-919901-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-919901 status -v=7 --alsologtostderr: exit status 3 (3.746353009s)

                                                
                                                
-- stdout --
	ha-919901
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-919901-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-919901-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-919901-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 10:44:25.478654   27570 out.go:291] Setting OutFile to fd 1 ...
	I0812 10:44:25.478852   27570 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:44:25.478869   27570 out.go:304] Setting ErrFile to fd 2...
	I0812 10:44:25.478884   27570 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:44:25.479135   27570 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 10:44:25.479306   27570 out.go:298] Setting JSON to false
	I0812 10:44:25.479327   27570 mustload.go:65] Loading cluster: ha-919901
	I0812 10:44:25.479423   27570 notify.go:220] Checking for updates...
	I0812 10:44:25.479710   27570 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:44:25.479724   27570 status.go:255] checking status of ha-919901 ...
	I0812 10:44:25.480173   27570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:25.480244   27570 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:25.495557   27570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40859
	I0812 10:44:25.496036   27570 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:25.496552   27570 main.go:141] libmachine: Using API Version  1
	I0812 10:44:25.496573   27570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:25.497039   27570 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:25.497253   27570 main.go:141] libmachine: (ha-919901) Calling .GetState
	I0812 10:44:25.498846   27570 status.go:330] ha-919901 host status = "Running" (err=<nil>)
	I0812 10:44:25.498863   27570 host.go:66] Checking if "ha-919901" exists ...
	I0812 10:44:25.499230   27570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:25.499272   27570 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:25.514456   27570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46193
	I0812 10:44:25.514946   27570 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:25.515531   27570 main.go:141] libmachine: Using API Version  1
	I0812 10:44:25.515553   27570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:25.515861   27570 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:25.516059   27570 main.go:141] libmachine: (ha-919901) Calling .GetIP
	I0812 10:44:25.518943   27570 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:44:25.519384   27570 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:44:25.519406   27570 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:44:25.519570   27570 host.go:66] Checking if "ha-919901" exists ...
	I0812 10:44:25.519885   27570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:25.519920   27570 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:25.538152   27570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41171
	I0812 10:44:25.538630   27570 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:25.539143   27570 main.go:141] libmachine: Using API Version  1
	I0812 10:44:25.539165   27570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:25.539539   27570 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:25.539722   27570 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:44:25.539945   27570 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:44:25.539994   27570 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:44:25.542902   27570 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:44:25.543345   27570 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:44:25.543375   27570 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:44:25.543609   27570 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:44:25.543824   27570 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:44:25.544011   27570 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:44:25.544154   27570 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:44:25.633047   27570 ssh_runner.go:195] Run: systemctl --version
	I0812 10:44:25.639633   27570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:44:25.655987   27570 kubeconfig.go:125] found "ha-919901" server: "https://192.168.39.254:8443"
	I0812 10:44:25.656019   27570 api_server.go:166] Checking apiserver status ...
	I0812 10:44:25.656063   27570 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:44:25.673120   27570 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup
	W0812 10:44:25.683933   27570 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 10:44:25.683999   27570 ssh_runner.go:195] Run: ls
	I0812 10:44:25.689030   27570 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 10:44:25.695685   27570 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 10:44:25.695740   27570 status.go:422] ha-919901 apiserver status = Running (err=<nil>)
	I0812 10:44:25.695761   27570 status.go:257] ha-919901 status: &{Name:ha-919901 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 10:44:25.695785   27570 status.go:255] checking status of ha-919901-m02 ...
	I0812 10:44:25.696134   27570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:25.696177   27570 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:25.711706   27570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45305
	I0812 10:44:25.712142   27570 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:25.712617   27570 main.go:141] libmachine: Using API Version  1
	I0812 10:44:25.712633   27570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:25.712978   27570 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:25.713206   27570 main.go:141] libmachine: (ha-919901-m02) Calling .GetState
	I0812 10:44:25.714857   27570 status.go:330] ha-919901-m02 host status = "Running" (err=<nil>)
	I0812 10:44:25.714875   27570 host.go:66] Checking if "ha-919901-m02" exists ...
	I0812 10:44:25.715216   27570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:25.715259   27570 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:25.730181   27570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38199
	I0812 10:44:25.730580   27570 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:25.731050   27570 main.go:141] libmachine: Using API Version  1
	I0812 10:44:25.731072   27570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:25.731359   27570 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:25.731526   27570 main.go:141] libmachine: (ha-919901-m02) Calling .GetIP
	I0812 10:44:25.734211   27570 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:44:25.734658   27570 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:44:25.734699   27570 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:44:25.734759   27570 host.go:66] Checking if "ha-919901-m02" exists ...
	I0812 10:44:25.735101   27570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:25.735142   27570 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:25.751192   27570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43241
	I0812 10:44:25.751652   27570 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:25.752100   27570 main.go:141] libmachine: Using API Version  1
	I0812 10:44:25.752125   27570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:25.752459   27570 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:25.752688   27570 main.go:141] libmachine: (ha-919901-m02) Calling .DriverName
	I0812 10:44:25.752962   27570 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:44:25.752990   27570 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHHostname
	I0812 10:44:25.755769   27570 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:44:25.756271   27570 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:44:25.756299   27570 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:44:25.756477   27570 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHPort
	I0812 10:44:25.756674   27570 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:44:25.756800   27570 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHUsername
	I0812 10:44:25.756950   27570 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02/id_rsa Username:docker}
	W0812 10:44:28.829118   27570 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.139:22: connect: no route to host
	W0812 10:44:28.829198   27570 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.139:22: connect: no route to host
	E0812 10:44:28.829213   27570 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.139:22: connect: no route to host
	I0812 10:44:28.829221   27570 status.go:257] ha-919901-m02 status: &{Name:ha-919901-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0812 10:44:28.829237   27570 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.139:22: connect: no route to host
	I0812 10:44:28.829244   27570 status.go:255] checking status of ha-919901-m03 ...
	I0812 10:44:28.829570   27570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:28.829611   27570 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:28.847024   27570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37399
	I0812 10:44:28.847457   27570 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:28.848014   27570 main.go:141] libmachine: Using API Version  1
	I0812 10:44:28.848040   27570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:28.848448   27570 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:28.848690   27570 main.go:141] libmachine: (ha-919901-m03) Calling .GetState
	I0812 10:44:28.850471   27570 status.go:330] ha-919901-m03 host status = "Running" (err=<nil>)
	I0812 10:44:28.850488   27570 host.go:66] Checking if "ha-919901-m03" exists ...
	I0812 10:44:28.850816   27570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:28.850858   27570 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:28.865925   27570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44195
	I0812 10:44:28.866424   27570 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:28.866945   27570 main.go:141] libmachine: Using API Version  1
	I0812 10:44:28.866973   27570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:28.867277   27570 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:28.867544   27570 main.go:141] libmachine: (ha-919901-m03) Calling .GetIP
	I0812 10:44:28.870586   27570 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:44:28.871048   27570 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:44:28.871081   27570 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:44:28.871273   27570 host.go:66] Checking if "ha-919901-m03" exists ...
	I0812 10:44:28.871626   27570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:28.871675   27570 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:28.887902   27570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46629
	I0812 10:44:28.888321   27570 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:28.888829   27570 main.go:141] libmachine: Using API Version  1
	I0812 10:44:28.888857   27570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:28.889202   27570 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:28.889423   27570 main.go:141] libmachine: (ha-919901-m03) Calling .DriverName
	I0812 10:44:28.889651   27570 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:44:28.889680   27570 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHHostname
	I0812 10:44:28.892671   27570 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:44:28.893116   27570 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:44:28.893156   27570 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:44:28.893314   27570 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHPort
	I0812 10:44:28.893539   27570 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:44:28.893724   27570 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHUsername
	I0812 10:44:28.893855   27570 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/id_rsa Username:docker}
	I0812 10:44:28.976428   27570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:44:28.990970   27570 kubeconfig.go:125] found "ha-919901" server: "https://192.168.39.254:8443"
	I0812 10:44:28.991007   27570 api_server.go:166] Checking apiserver status ...
	I0812 10:44:28.991046   27570 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:44:29.005363   27570 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup
	W0812 10:44:29.015318   27570 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 10:44:29.015375   27570 ssh_runner.go:195] Run: ls
	I0812 10:44:29.019580   27570 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 10:44:29.023996   27570 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 10:44:29.024025   27570 status.go:422] ha-919901-m03 apiserver status = Running (err=<nil>)
	I0812 10:44:29.024037   27570 status.go:257] ha-919901-m03 status: &{Name:ha-919901-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 10:44:29.024062   27570 status.go:255] checking status of ha-919901-m04 ...
	I0812 10:44:29.024427   27570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:29.024468   27570 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:29.039457   27570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46455
	I0812 10:44:29.039943   27570 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:29.040562   27570 main.go:141] libmachine: Using API Version  1
	I0812 10:44:29.040583   27570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:29.040915   27570 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:29.041105   27570 main.go:141] libmachine: (ha-919901-m04) Calling .GetState
	I0812 10:44:29.042615   27570 status.go:330] ha-919901-m04 host status = "Running" (err=<nil>)
	I0812 10:44:29.042631   27570 host.go:66] Checking if "ha-919901-m04" exists ...
	I0812 10:44:29.042905   27570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:29.042938   27570 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:29.060137   27570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36433
	I0812 10:44:29.060607   27570 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:29.061153   27570 main.go:141] libmachine: Using API Version  1
	I0812 10:44:29.061170   27570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:29.061453   27570 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:29.061626   27570 main.go:141] libmachine: (ha-919901-m04) Calling .GetIP
	I0812 10:44:29.064595   27570 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:44:29.065089   27570 main.go:141] libmachine: (ha-919901-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:f6:73", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:40:36 +0000 UTC Type:0 Mac:52:54:00:4d:f6:73 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:ha-919901-m04 Clientid:01:52:54:00:4d:f6:73}
	I0812 10:44:29.065128   27570 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined IP address 192.168.39.218 and MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:44:29.065268   27570 host.go:66] Checking if "ha-919901-m04" exists ...
	I0812 10:44:29.065672   27570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:29.065719   27570 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:29.082641   27570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45793
	I0812 10:44:29.083055   27570 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:29.083586   27570 main.go:141] libmachine: Using API Version  1
	I0812 10:44:29.083615   27570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:29.083933   27570 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:29.084135   27570 main.go:141] libmachine: (ha-919901-m04) Calling .DriverName
	I0812 10:44:29.084304   27570 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:44:29.084322   27570 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHHostname
	I0812 10:44:29.087354   27570 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:44:29.087783   27570 main.go:141] libmachine: (ha-919901-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:f6:73", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:40:36 +0000 UTC Type:0 Mac:52:54:00:4d:f6:73 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:ha-919901-m04 Clientid:01:52:54:00:4d:f6:73}
	I0812 10:44:29.087815   27570 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined IP address 192.168.39.218 and MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:44:29.088014   27570 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHPort
	I0812 10:44:29.088194   27570 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHKeyPath
	I0812 10:44:29.088431   27570 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHUsername
	I0812 10:44:29.088583   27570 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m04/id_rsa Username:docker}
	I0812 10:44:29.168471   27570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:44:29.182328   27570 status.go:257] ha-919901-m04 status: &{Name:ha-919901-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
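The stderr trace above walks through minikube's per-node status probe: launch the kvm2 driver plugin on a local RPC port, ask libvirt for the domain state, open an SSH session to run `df -h /var` and `systemctl is-active kubelet`, then query the control-plane VIP's /healthz. For ha-919901-m02 the SSH dial fails with "connect: no route to host", so that node is reported as Host:Error / Kubelet:Nonexistent. A minimal, hypothetical Go sketch of that first reachability step (not minikube's code; the address and timeout are assumptions taken from the log for illustration):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the node's SSH port the way the dial in the trace does; a VM that
	// is down surfaces as "connect: no route to host" before any command runs.
	conn, err := net.DialTimeout("tcp", "192.168.39.139:22", 3*time.Second)
	if err != nil {
		fmt.Println("ssh port unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("ssh port reachable")
}
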
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-919901 status -v=7 --alsologtostderr: exit status 7 (617.759726ms)

                                                
                                                
-- stdout --
	ha-919901
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-919901-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-919901-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-919901-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 10:44:38.599332   27724 out.go:291] Setting OutFile to fd 1 ...
	I0812 10:44:38.599463   27724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:44:38.599473   27724 out.go:304] Setting ErrFile to fd 2...
	I0812 10:44:38.599479   27724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:44:38.599682   27724 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 10:44:38.599894   27724 out.go:298] Setting JSON to false
	I0812 10:44:38.599919   27724 mustload.go:65] Loading cluster: ha-919901
	I0812 10:44:38.600020   27724 notify.go:220] Checking for updates...
	I0812 10:44:38.600329   27724 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:44:38.600345   27724 status.go:255] checking status of ha-919901 ...
	I0812 10:44:38.600725   27724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:38.600809   27724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:38.618990   27724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38609
	I0812 10:44:38.619428   27724 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:38.620094   27724 main.go:141] libmachine: Using API Version  1
	I0812 10:44:38.620119   27724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:38.620498   27724 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:38.620712   27724 main.go:141] libmachine: (ha-919901) Calling .GetState
	I0812 10:44:38.622549   27724 status.go:330] ha-919901 host status = "Running" (err=<nil>)
	I0812 10:44:38.622565   27724 host.go:66] Checking if "ha-919901" exists ...
	I0812 10:44:38.622921   27724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:38.622964   27724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:38.637696   27724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42235
	I0812 10:44:38.638111   27724 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:38.638528   27724 main.go:141] libmachine: Using API Version  1
	I0812 10:44:38.638554   27724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:38.638833   27724 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:38.638993   27724 main.go:141] libmachine: (ha-919901) Calling .GetIP
	I0812 10:44:38.642006   27724 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:44:38.642414   27724 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:44:38.642445   27724 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:44:38.642648   27724 host.go:66] Checking if "ha-919901" exists ...
	I0812 10:44:38.642925   27724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:38.642962   27724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:38.658188   27724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33433
	I0812 10:44:38.658650   27724 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:38.659207   27724 main.go:141] libmachine: Using API Version  1
	I0812 10:44:38.659224   27724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:38.659549   27724 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:38.659709   27724 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:44:38.659904   27724 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:44:38.659929   27724 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:44:38.662459   27724 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:44:38.662810   27724 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:44:38.662838   27724 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:44:38.663014   27724 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:44:38.663190   27724 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:44:38.663370   27724 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:44:38.663515   27724 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:44:38.744842   27724 ssh_runner.go:195] Run: systemctl --version
	I0812 10:44:38.750861   27724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:44:38.765683   27724 kubeconfig.go:125] found "ha-919901" server: "https://192.168.39.254:8443"
	I0812 10:44:38.765711   27724 api_server.go:166] Checking apiserver status ...
	I0812 10:44:38.765751   27724 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:44:38.779497   27724 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup
	W0812 10:44:38.789713   27724 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 10:44:38.789775   27724 ssh_runner.go:195] Run: ls
	I0812 10:44:38.794332   27724 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 10:44:38.799082   27724 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 10:44:38.799108   27724 status.go:422] ha-919901 apiserver status = Running (err=<nil>)
	I0812 10:44:38.799117   27724 status.go:257] ha-919901 status: &{Name:ha-919901 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 10:44:38.799133   27724 status.go:255] checking status of ha-919901-m02 ...
	I0812 10:44:38.799473   27724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:38.799508   27724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:38.815216   27724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37587
	I0812 10:44:38.815644   27724 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:38.816085   27724 main.go:141] libmachine: Using API Version  1
	I0812 10:44:38.816105   27724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:38.816353   27724 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:38.816517   27724 main.go:141] libmachine: (ha-919901-m02) Calling .GetState
	I0812 10:44:38.818181   27724 status.go:330] ha-919901-m02 host status = "Stopped" (err=<nil>)
	I0812 10:44:38.818194   27724 status.go:343] host is not running, skipping remaining checks
	I0812 10:44:38.818200   27724 status.go:257] ha-919901-m02 status: &{Name:ha-919901-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 10:44:38.818223   27724 status.go:255] checking status of ha-919901-m03 ...
	I0812 10:44:38.818505   27724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:38.818556   27724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:38.834344   27724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38591
	I0812 10:44:38.834873   27724 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:38.835385   27724 main.go:141] libmachine: Using API Version  1
	I0812 10:44:38.835408   27724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:38.835744   27724 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:38.835936   27724 main.go:141] libmachine: (ha-919901-m03) Calling .GetState
	I0812 10:44:38.837716   27724 status.go:330] ha-919901-m03 host status = "Running" (err=<nil>)
	I0812 10:44:38.837734   27724 host.go:66] Checking if "ha-919901-m03" exists ...
	I0812 10:44:38.838065   27724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:38.838105   27724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:38.853507   27724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34543
	I0812 10:44:38.853976   27724 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:38.854467   27724 main.go:141] libmachine: Using API Version  1
	I0812 10:44:38.854486   27724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:38.854851   27724 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:38.855007   27724 main.go:141] libmachine: (ha-919901-m03) Calling .GetIP
	I0812 10:44:38.857719   27724 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:44:38.858109   27724 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:44:38.858145   27724 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:44:38.858314   27724 host.go:66] Checking if "ha-919901-m03" exists ...
	I0812 10:44:38.858600   27724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:38.858640   27724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:38.874329   27724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40227
	I0812 10:44:38.874912   27724 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:38.875493   27724 main.go:141] libmachine: Using API Version  1
	I0812 10:44:38.875524   27724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:38.875896   27724 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:38.876082   27724 main.go:141] libmachine: (ha-919901-m03) Calling .DriverName
	I0812 10:44:38.876304   27724 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:44:38.876330   27724 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHHostname
	I0812 10:44:38.879104   27724 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:44:38.879722   27724 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:44:38.879763   27724 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:44:38.880024   27724 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHPort
	I0812 10:44:38.880234   27724 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:44:38.880422   27724 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHUsername
	I0812 10:44:38.880549   27724 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/id_rsa Username:docker}
	I0812 10:44:38.964845   27724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:44:38.980293   27724 kubeconfig.go:125] found "ha-919901" server: "https://192.168.39.254:8443"
	I0812 10:44:38.980324   27724 api_server.go:166] Checking apiserver status ...
	I0812 10:44:38.980358   27724 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:44:38.994217   27724 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup
	W0812 10:44:39.007850   27724 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 10:44:39.007915   27724 ssh_runner.go:195] Run: ls
	I0812 10:44:39.012683   27724 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 10:44:39.019233   27724 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 10:44:39.019265   27724 status.go:422] ha-919901-m03 apiserver status = Running (err=<nil>)
	I0812 10:44:39.019277   27724 status.go:257] ha-919901-m03 status: &{Name:ha-919901-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 10:44:39.019311   27724 status.go:255] checking status of ha-919901-m04 ...
	I0812 10:44:39.019629   27724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:39.019668   27724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:39.035212   27724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44865
	I0812 10:44:39.035673   27724 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:39.036161   27724 main.go:141] libmachine: Using API Version  1
	I0812 10:44:39.036177   27724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:39.036467   27724 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:39.036621   27724 main.go:141] libmachine: (ha-919901-m04) Calling .GetState
	I0812 10:44:39.038594   27724 status.go:330] ha-919901-m04 host status = "Running" (err=<nil>)
	I0812 10:44:39.038615   27724 host.go:66] Checking if "ha-919901-m04" exists ...
	I0812 10:44:39.038913   27724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:39.038969   27724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:39.053931   27724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46197
	I0812 10:44:39.054399   27724 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:39.054970   27724 main.go:141] libmachine: Using API Version  1
	I0812 10:44:39.054989   27724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:39.055343   27724 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:39.055564   27724 main.go:141] libmachine: (ha-919901-m04) Calling .GetIP
	I0812 10:44:39.058716   27724 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:44:39.059158   27724 main.go:141] libmachine: (ha-919901-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:f6:73", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:40:36 +0000 UTC Type:0 Mac:52:54:00:4d:f6:73 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:ha-919901-m04 Clientid:01:52:54:00:4d:f6:73}
	I0812 10:44:39.059191   27724 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined IP address 192.168.39.218 and MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:44:39.059394   27724 host.go:66] Checking if "ha-919901-m04" exists ...
	I0812 10:44:39.059672   27724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:39.059708   27724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:39.075400   27724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41193
	I0812 10:44:39.075813   27724 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:39.076351   27724 main.go:141] libmachine: Using API Version  1
	I0812 10:44:39.076372   27724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:39.076801   27724 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:39.077119   27724 main.go:141] libmachine: (ha-919901-m04) Calling .DriverName
	I0812 10:44:39.077457   27724 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:44:39.077482   27724 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHHostname
	I0812 10:44:39.080612   27724 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:44:39.081065   27724 main.go:141] libmachine: (ha-919901-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:f6:73", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:40:36 +0000 UTC Type:0 Mac:52:54:00:4d:f6:73 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:ha-919901-m04 Clientid:01:52:54:00:4d:f6:73}
	I0812 10:44:39.081094   27724 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined IP address 192.168.39.218 and MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:44:39.081255   27724 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHPort
	I0812 10:44:39.081420   27724 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHKeyPath
	I0812 10:44:39.081611   27724 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHUsername
	I0812 10:44:39.081762   27724 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m04/id_rsa Username:docker}
	I0812 10:44:39.160207   27724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:44:39.174458   27724 status.go:257] ha-919901-m04 status: &{Name:ha-919901-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
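In this second trace the kvm2 plugin's GetState already reports ha-919901-m02 as "Stopped", so minikube skips the SSH and apiserver checks for that node; for the running control planes the probe ends with a GET on https://192.168.39.254:8443/healthz that returns 200 "ok". A minimal, hypothetical sketch of that final health check (not minikube's implementation; TLS verification is skipped only because this sketch does not load the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Query the HA VIP's /healthz endpoint, mirroring the api_server.go checks
	// logged above; a healthy apiserver answers 200 with body "ok".
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
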
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-919901 status -v=7 --alsologtostderr: exit status 7 (606.426387ms)

                                                
                                                
-- stdout --
	ha-919901
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-919901-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-919901-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-919901-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 10:44:49.475031   27827 out.go:291] Setting OutFile to fd 1 ...
	I0812 10:44:49.475289   27827 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:44:49.475297   27827 out.go:304] Setting ErrFile to fd 2...
	I0812 10:44:49.475301   27827 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:44:49.475549   27827 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 10:44:49.475782   27827 out.go:298] Setting JSON to false
	I0812 10:44:49.475814   27827 mustload.go:65] Loading cluster: ha-919901
	I0812 10:44:49.475864   27827 notify.go:220] Checking for updates...
	I0812 10:44:49.476324   27827 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:44:49.476345   27827 status.go:255] checking status of ha-919901 ...
	I0812 10:44:49.476998   27827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:49.477048   27827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:49.494924   27827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42027
	I0812 10:44:49.495433   27827 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:49.496172   27827 main.go:141] libmachine: Using API Version  1
	I0812 10:44:49.496203   27827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:49.496509   27827 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:49.496728   27827 main.go:141] libmachine: (ha-919901) Calling .GetState
	I0812 10:44:49.498380   27827 status.go:330] ha-919901 host status = "Running" (err=<nil>)
	I0812 10:44:49.498396   27827 host.go:66] Checking if "ha-919901" exists ...
	I0812 10:44:49.498714   27827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:49.498754   27827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:49.513424   27827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45643
	I0812 10:44:49.513906   27827 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:49.514433   27827 main.go:141] libmachine: Using API Version  1
	I0812 10:44:49.514449   27827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:49.514794   27827 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:49.515012   27827 main.go:141] libmachine: (ha-919901) Calling .GetIP
	I0812 10:44:49.518041   27827 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:44:49.518471   27827 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:44:49.518493   27827 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:44:49.518628   27827 host.go:66] Checking if "ha-919901" exists ...
	I0812 10:44:49.518920   27827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:49.518973   27827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:49.534059   27827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34937
	I0812 10:44:49.534561   27827 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:49.535120   27827 main.go:141] libmachine: Using API Version  1
	I0812 10:44:49.535141   27827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:49.535529   27827 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:49.535733   27827 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:44:49.535918   27827 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:44:49.535938   27827 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:44:49.538657   27827 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:44:49.539017   27827 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:44:49.539053   27827 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:44:49.539215   27827 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:44:49.539389   27827 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:44:49.539537   27827 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:44:49.539663   27827 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:44:49.624439   27827 ssh_runner.go:195] Run: systemctl --version
	I0812 10:44:49.630947   27827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:44:49.646814   27827 kubeconfig.go:125] found "ha-919901" server: "https://192.168.39.254:8443"
	I0812 10:44:49.646844   27827 api_server.go:166] Checking apiserver status ...
	I0812 10:44:49.646879   27827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:44:49.661155   27827 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup
	W0812 10:44:49.670622   27827 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 10:44:49.670696   27827 ssh_runner.go:195] Run: ls
	I0812 10:44:49.674863   27827 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 10:44:49.679334   27827 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 10:44:49.679378   27827 status.go:422] ha-919901 apiserver status = Running (err=<nil>)
	I0812 10:44:49.679395   27827 status.go:257] ha-919901 status: &{Name:ha-919901 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 10:44:49.679419   27827 status.go:255] checking status of ha-919901-m02 ...
	I0812 10:44:49.679724   27827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:49.679767   27827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:49.695708   27827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42847
	I0812 10:44:49.696124   27827 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:49.696689   27827 main.go:141] libmachine: Using API Version  1
	I0812 10:44:49.696714   27827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:49.697090   27827 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:49.697267   27827 main.go:141] libmachine: (ha-919901-m02) Calling .GetState
	I0812 10:44:49.699020   27827 status.go:330] ha-919901-m02 host status = "Stopped" (err=<nil>)
	I0812 10:44:49.699035   27827 status.go:343] host is not running, skipping remaining checks
	I0812 10:44:49.699041   27827 status.go:257] ha-919901-m02 status: &{Name:ha-919901-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 10:44:49.699070   27827 status.go:255] checking status of ha-919901-m03 ...
	I0812 10:44:49.699381   27827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:49.699416   27827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:49.714112   27827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34369
	I0812 10:44:49.714593   27827 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:49.715087   27827 main.go:141] libmachine: Using API Version  1
	I0812 10:44:49.715112   27827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:49.715421   27827 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:49.715654   27827 main.go:141] libmachine: (ha-919901-m03) Calling .GetState
	I0812 10:44:49.717508   27827 status.go:330] ha-919901-m03 host status = "Running" (err=<nil>)
	I0812 10:44:49.717538   27827 host.go:66] Checking if "ha-919901-m03" exists ...
	I0812 10:44:49.717877   27827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:49.717917   27827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:49.733530   27827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46561
	I0812 10:44:49.733917   27827 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:49.734365   27827 main.go:141] libmachine: Using API Version  1
	I0812 10:44:49.734388   27827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:49.734760   27827 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:49.734964   27827 main.go:141] libmachine: (ha-919901-m03) Calling .GetIP
	I0812 10:44:49.737716   27827 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:44:49.738076   27827 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:44:49.738102   27827 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:44:49.738199   27827 host.go:66] Checking if "ha-919901-m03" exists ...
	I0812 10:44:49.738477   27827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:49.738515   27827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:49.753823   27827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45229
	I0812 10:44:49.754195   27827 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:49.754613   27827 main.go:141] libmachine: Using API Version  1
	I0812 10:44:49.754638   27827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:49.754917   27827 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:49.755121   27827 main.go:141] libmachine: (ha-919901-m03) Calling .DriverName
	I0812 10:44:49.755321   27827 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:44:49.755343   27827 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHHostname
	I0812 10:44:49.758279   27827 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:44:49.758735   27827 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:44:49.758762   27827 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:44:49.759024   27827 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHPort
	I0812 10:44:49.759208   27827 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:44:49.759361   27827 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHUsername
	I0812 10:44:49.759486   27827 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/id_rsa Username:docker}
	I0812 10:44:49.840846   27827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:44:49.855714   27827 kubeconfig.go:125] found "ha-919901" server: "https://192.168.39.254:8443"
	I0812 10:44:49.855746   27827 api_server.go:166] Checking apiserver status ...
	I0812 10:44:49.855786   27827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:44:49.869971   27827 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup
	W0812 10:44:49.880124   27827 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 10:44:49.880184   27827 ssh_runner.go:195] Run: ls
	I0812 10:44:49.884973   27827 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 10:44:49.889749   27827 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 10:44:49.889782   27827 status.go:422] ha-919901-m03 apiserver status = Running (err=<nil>)
	I0812 10:44:49.889794   27827 status.go:257] ha-919901-m03 status: &{Name:ha-919901-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 10:44:49.889824   27827 status.go:255] checking status of ha-919901-m04 ...
	I0812 10:44:49.890188   27827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:49.890228   27827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:49.905335   27827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32849
	I0812 10:44:49.905904   27827 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:49.906408   27827 main.go:141] libmachine: Using API Version  1
	I0812 10:44:49.906444   27827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:49.906746   27827 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:49.906930   27827 main.go:141] libmachine: (ha-919901-m04) Calling .GetState
	I0812 10:44:49.908473   27827 status.go:330] ha-919901-m04 host status = "Running" (err=<nil>)
	I0812 10:44:49.908488   27827 host.go:66] Checking if "ha-919901-m04" exists ...
	I0812 10:44:49.908791   27827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:49.908836   27827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:49.923442   27827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42771
	I0812 10:44:49.923882   27827 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:49.924382   27827 main.go:141] libmachine: Using API Version  1
	I0812 10:44:49.924409   27827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:49.924710   27827 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:49.924891   27827 main.go:141] libmachine: (ha-919901-m04) Calling .GetIP
	I0812 10:44:49.927491   27827 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:44:49.927906   27827 main.go:141] libmachine: (ha-919901-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:f6:73", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:40:36 +0000 UTC Type:0 Mac:52:54:00:4d:f6:73 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:ha-919901-m04 Clientid:01:52:54:00:4d:f6:73}
	I0812 10:44:49.927943   27827 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined IP address 192.168.39.218 and MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:44:49.928047   27827 host.go:66] Checking if "ha-919901-m04" exists ...
	I0812 10:44:49.928440   27827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:49.928488   27827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:49.943294   27827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45365
	I0812 10:44:49.943766   27827 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:49.944284   27827 main.go:141] libmachine: Using API Version  1
	I0812 10:44:49.944306   27827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:49.944738   27827 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:49.944922   27827 main.go:141] libmachine: (ha-919901-m04) Calling .DriverName
	I0812 10:44:49.945104   27827 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:44:49.945126   27827 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHHostname
	I0812 10:44:49.948157   27827 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:44:49.948570   27827 main.go:141] libmachine: (ha-919901-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:f6:73", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:40:36 +0000 UTC Type:0 Mac:52:54:00:4d:f6:73 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:ha-919901-m04 Clientid:01:52:54:00:4d:f6:73}
	I0812 10:44:49.948592   27827 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined IP address 192.168.39.218 and MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:44:49.948804   27827 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHPort
	I0812 10:44:49.949007   27827 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHKeyPath
	I0812 10:44:49.949173   27827 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHUsername
	I0812 10:44:49.949343   27827 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m04/id_rsa Username:docker}
	I0812 10:44:50.024400   27827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:44:50.039199   27827 status.go:257] ha-919901-m04 status: &{Name:ha-919901-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-919901 status -v=7 --alsologtostderr" : exit status 7
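ha_test.go treats any non-zero exit from the status command as a failure; here `minikube status` exits 7 because ha-919901-m02 is still reported as Stopped after the restart. A hypothetical sketch of re-running the same invocation and surfacing its exit code (binary path and flags copied from the invocation logged above):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Re-run the exact status invocation from the test; with one control-plane
	// node stopped it exits 7 instead of 0, which is what the assertion sees.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-919901", "status", "-v=7", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("exit code:", cmd.ProcessState.ExitCode())
	}
}
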
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-919901 -n ha-919901
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-919901 logs -n 25: (1.446281445s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-919901 ssh -n                                                                 | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-919901 cp ha-919901-m03:/home/docker/cp-test.txt                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901:/home/docker/cp-test_ha-919901-m03_ha-919901.txt                       |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n                                                                 | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n ha-919901 sudo cat                                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | /home/docker/cp-test_ha-919901-m03_ha-919901.txt                                 |           |         |         |                     |                     |
	| cp      | ha-919901 cp ha-919901-m03:/home/docker/cp-test.txt                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m02:/home/docker/cp-test_ha-919901-m03_ha-919901-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n                                                                 | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n ha-919901-m02 sudo cat                                          | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | /home/docker/cp-test_ha-919901-m03_ha-919901-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-919901 cp ha-919901-m03:/home/docker/cp-test.txt                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m04:/home/docker/cp-test_ha-919901-m03_ha-919901-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n                                                                 | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n ha-919901-m04 sudo cat                                          | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | /home/docker/cp-test_ha-919901-m03_ha-919901-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-919901 cp testdata/cp-test.txt                                                | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n                                                                 | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-919901 cp ha-919901-m04:/home/docker/cp-test.txt                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2587644134/001/cp-test_ha-919901-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n                                                                 | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-919901 cp ha-919901-m04:/home/docker/cp-test.txt                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901:/home/docker/cp-test_ha-919901-m04_ha-919901.txt                       |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n                                                                 | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n ha-919901 sudo cat                                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | /home/docker/cp-test_ha-919901-m04_ha-919901.txt                                 |           |         |         |                     |                     |
	| cp      | ha-919901 cp ha-919901-m04:/home/docker/cp-test.txt                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m02:/home/docker/cp-test_ha-919901-m04_ha-919901-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n                                                                 | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n ha-919901-m02 sudo cat                                          | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | /home/docker/cp-test_ha-919901-m04_ha-919901-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-919901 cp ha-919901-m04:/home/docker/cp-test.txt                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m03:/home/docker/cp-test_ha-919901-m04_ha-919901-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n                                                                 | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n ha-919901-m03 sudo cat                                          | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | /home/docker/cp-test_ha-919901-m04_ha-919901-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-919901 node stop m02 -v=7                                                     | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-919901 node start m02 -v=7                                                    | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:43 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
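	The two most recent audit rows, "node stop m02" and "node start m02", are the only entries without an End Time, which matches the exit status 7 reported by ha_test.go:432 above. As a rough reproduction sketch (binary path, profile name, and flags are taken from this table and from the failed status command above, not from any extra tooling):
	
	  out/minikube-linux-amd64 -p ha-919901 node stop m02 -v=7 --alsologtostderr
	  out/minikube-linux-amd64 -p ha-919901 node start m02 -v=7 --alsologtostderr
	  out/minikube-linux-amd64 -p ha-919901 status -v=7 --alsologtostderr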
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 10:36:36
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 10:36:36.258715   22139 out.go:291] Setting OutFile to fd 1 ...
	I0812 10:36:36.258970   22139 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:36:36.258979   22139 out.go:304] Setting ErrFile to fd 2...
	I0812 10:36:36.258983   22139 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:36:36.259142   22139 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 10:36:36.259711   22139 out.go:298] Setting JSON to false
	I0812 10:36:36.260545   22139 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1137,"bootTime":1723457859,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 10:36:36.260611   22139 start.go:139] virtualization: kvm guest
	I0812 10:36:36.262778   22139 out.go:177] * [ha-919901] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 10:36:36.264060   22139 notify.go:220] Checking for updates...
	I0812 10:36:36.264095   22139 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 10:36:36.265668   22139 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 10:36:36.267193   22139 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 10:36:36.268817   22139 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 10:36:36.270270   22139 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 10:36:36.271475   22139 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 10:36:36.272701   22139 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 10:36:36.308466   22139 out.go:177] * Using the kvm2 driver based on user configuration
	I0812 10:36:36.309854   22139 start.go:297] selected driver: kvm2
	I0812 10:36:36.309872   22139 start.go:901] validating driver "kvm2" against <nil>
	I0812 10:36:36.309883   22139 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 10:36:36.310563   22139 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 10:36:36.310644   22139 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19409-3774/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 10:36:36.326403   22139 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 10:36:36.326467   22139 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 10:36:36.326691   22139 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 10:36:36.326719   22139 cni.go:84] Creating CNI manager for ""
	I0812 10:36:36.326732   22139 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0812 10:36:36.326740   22139 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0812 10:36:36.326793   22139 start.go:340] cluster config:
	{Name:ha-919901 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-919901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 10:36:36.326886   22139 iso.go:125] acquiring lock: {Name:mk817f4bb6a5031e68978029203802957205757f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 10:36:36.328810   22139 out.go:177] * Starting "ha-919901" primary control-plane node in "ha-919901" cluster
	I0812 10:36:36.330149   22139 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 10:36:36.330196   22139 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0812 10:36:36.330206   22139 cache.go:56] Caching tarball of preloaded images
	I0812 10:36:36.330283   22139 preload.go:172] Found /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 10:36:36.330293   22139 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 10:36:36.330604   22139 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/config.json ...
	I0812 10:36:36.330623   22139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/config.json: {Name:mkdd87194089c92fa3aeaf7fe7c90e067b5812a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:36:36.330763   22139 start.go:360] acquireMachinesLock for ha-919901: {Name:mkf1f3ff7da562a087a1e344d13e67c3c8140973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 10:36:36.330790   22139 start.go:364] duration metric: took 14.602µs to acquireMachinesLock for "ha-919901"
	I0812 10:36:36.330805   22139 start.go:93] Provisioning new machine with config: &{Name:ha-919901 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-919901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 10:36:36.330860   22139 start.go:125] createHost starting for "" (driver="kvm2")
	I0812 10:36:36.332733   22139 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 10:36:36.332909   22139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:36:36.332965   22139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:36:36.347922   22139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43033
	I0812 10:36:36.348426   22139 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:36:36.349005   22139 main.go:141] libmachine: Using API Version  1
	I0812 10:36:36.349040   22139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:36:36.349444   22139 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:36:36.349666   22139 main.go:141] libmachine: (ha-919901) Calling .GetMachineName
	I0812 10:36:36.349842   22139 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:36:36.350016   22139 start.go:159] libmachine.API.Create for "ha-919901" (driver="kvm2")
	I0812 10:36:36.350047   22139 client.go:168] LocalClient.Create starting
	I0812 10:36:36.350084   22139 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem
	I0812 10:36:36.350130   22139 main.go:141] libmachine: Decoding PEM data...
	I0812 10:36:36.350156   22139 main.go:141] libmachine: Parsing certificate...
	I0812 10:36:36.350223   22139 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem
	I0812 10:36:36.350250   22139 main.go:141] libmachine: Decoding PEM data...
	I0812 10:36:36.350269   22139 main.go:141] libmachine: Parsing certificate...
	I0812 10:36:36.350299   22139 main.go:141] libmachine: Running pre-create checks...
	I0812 10:36:36.350312   22139 main.go:141] libmachine: (ha-919901) Calling .PreCreateCheck
	I0812 10:36:36.350680   22139 main.go:141] libmachine: (ha-919901) Calling .GetConfigRaw
	I0812 10:36:36.351097   22139 main.go:141] libmachine: Creating machine...
	I0812 10:36:36.351112   22139 main.go:141] libmachine: (ha-919901) Calling .Create
	I0812 10:36:36.351258   22139 main.go:141] libmachine: (ha-919901) Creating KVM machine...
	I0812 10:36:36.352740   22139 main.go:141] libmachine: (ha-919901) DBG | found existing default KVM network
	I0812 10:36:36.353576   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:36.353428   22162 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0812 10:36:36.353636   22139 main.go:141] libmachine: (ha-919901) DBG | created network xml: 
	I0812 10:36:36.353659   22139 main.go:141] libmachine: (ha-919901) DBG | <network>
	I0812 10:36:36.353671   22139 main.go:141] libmachine: (ha-919901) DBG |   <name>mk-ha-919901</name>
	I0812 10:36:36.353692   22139 main.go:141] libmachine: (ha-919901) DBG |   <dns enable='no'/>
	I0812 10:36:36.353707   22139 main.go:141] libmachine: (ha-919901) DBG |   
	I0812 10:36:36.353716   22139 main.go:141] libmachine: (ha-919901) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0812 10:36:36.353725   22139 main.go:141] libmachine: (ha-919901) DBG |     <dhcp>
	I0812 10:36:36.353735   22139 main.go:141] libmachine: (ha-919901) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0812 10:36:36.353765   22139 main.go:141] libmachine: (ha-919901) DBG |     </dhcp>
	I0812 10:36:36.353788   22139 main.go:141] libmachine: (ha-919901) DBG |   </ip>
	I0812 10:36:36.353796   22139 main.go:141] libmachine: (ha-919901) DBG |   
	I0812 10:36:36.353804   22139 main.go:141] libmachine: (ha-919901) DBG | </network>
	I0812 10:36:36.353827   22139 main.go:141] libmachine: (ha-919901) DBG | 
	I0812 10:36:36.359300   22139 main.go:141] libmachine: (ha-919901) DBG | trying to create private KVM network mk-ha-919901 192.168.39.0/24...
	I0812 10:36:36.426191   22139 main.go:141] libmachine: (ha-919901) Setting up store path in /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901 ...
	I0812 10:36:36.426222   22139 main.go:141] libmachine: (ha-919901) Building disk image from file:///home/jenkins/minikube-integration/19409-3774/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0812 10:36:36.426233   22139 main.go:141] libmachine: (ha-919901) DBG | private KVM network mk-ha-919901 192.168.39.0/24 created
	I0812 10:36:36.426248   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:36.426140   22162 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 10:36:36.426285   22139 main.go:141] libmachine: (ha-919901) Downloading /home/jenkins/minikube-integration/19409-3774/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19409-3774/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0812 10:36:36.666261   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:36.666088   22162 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa...
	I0812 10:36:36.725728   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:36.725612   22162 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/ha-919901.rawdisk...
	I0812 10:36:36.725762   22139 main.go:141] libmachine: (ha-919901) DBG | Writing magic tar header
	I0812 10:36:36.725777   22139 main.go:141] libmachine: (ha-919901) DBG | Writing SSH key tar header
	I0812 10:36:36.725787   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:36.725738   22162 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901 ...
	I0812 10:36:36.725830   22139 main.go:141] libmachine: (ha-919901) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901
	I0812 10:36:36.725902   22139 main.go:141] libmachine: (ha-919901) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901 (perms=drwx------)
	I0812 10:36:36.725926   22139 main.go:141] libmachine: (ha-919901) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube/machines (perms=drwxr-xr-x)
	I0812 10:36:36.725937   22139 main.go:141] libmachine: (ha-919901) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube/machines
	I0812 10:36:36.725949   22139 main.go:141] libmachine: (ha-919901) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube (perms=drwxr-xr-x)
	I0812 10:36:36.725976   22139 main.go:141] libmachine: (ha-919901) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774 (perms=drwxrwxr-x)
	I0812 10:36:36.725986   22139 main.go:141] libmachine: (ha-919901) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0812 10:36:36.726005   22139 main.go:141] libmachine: (ha-919901) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0812 10:36:36.726019   22139 main.go:141] libmachine: (ha-919901) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 10:36:36.726027   22139 main.go:141] libmachine: (ha-919901) Creating domain...
	I0812 10:36:36.726067   22139 main.go:141] libmachine: (ha-919901) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774
	I0812 10:36:36.726093   22139 main.go:141] libmachine: (ha-919901) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0812 10:36:36.726106   22139 main.go:141] libmachine: (ha-919901) DBG | Checking permissions on dir: /home/jenkins
	I0812 10:36:36.726120   22139 main.go:141] libmachine: (ha-919901) DBG | Checking permissions on dir: /home
	I0812 10:36:36.726143   22139 main.go:141] libmachine: (ha-919901) DBG | Skipping /home - not owner
	I0812 10:36:36.727230   22139 main.go:141] libmachine: (ha-919901) define libvirt domain using xml: 
	I0812 10:36:36.727246   22139 main.go:141] libmachine: (ha-919901) <domain type='kvm'>
	I0812 10:36:36.727255   22139 main.go:141] libmachine: (ha-919901)   <name>ha-919901</name>
	I0812 10:36:36.727263   22139 main.go:141] libmachine: (ha-919901)   <memory unit='MiB'>2200</memory>
	I0812 10:36:36.727271   22139 main.go:141] libmachine: (ha-919901)   <vcpu>2</vcpu>
	I0812 10:36:36.727278   22139 main.go:141] libmachine: (ha-919901)   <features>
	I0812 10:36:36.727290   22139 main.go:141] libmachine: (ha-919901)     <acpi/>
	I0812 10:36:36.727300   22139 main.go:141] libmachine: (ha-919901)     <apic/>
	I0812 10:36:36.727309   22139 main.go:141] libmachine: (ha-919901)     <pae/>
	I0812 10:36:36.727333   22139 main.go:141] libmachine: (ha-919901)     
	I0812 10:36:36.727344   22139 main.go:141] libmachine: (ha-919901)   </features>
	I0812 10:36:36.727355   22139 main.go:141] libmachine: (ha-919901)   <cpu mode='host-passthrough'>
	I0812 10:36:36.727364   22139 main.go:141] libmachine: (ha-919901)   
	I0812 10:36:36.727374   22139 main.go:141] libmachine: (ha-919901)   </cpu>
	I0812 10:36:36.727389   22139 main.go:141] libmachine: (ha-919901)   <os>
	I0812 10:36:36.727401   22139 main.go:141] libmachine: (ha-919901)     <type>hvm</type>
	I0812 10:36:36.727418   22139 main.go:141] libmachine: (ha-919901)     <boot dev='cdrom'/>
	I0812 10:36:36.727430   22139 main.go:141] libmachine: (ha-919901)     <boot dev='hd'/>
	I0812 10:36:36.727438   22139 main.go:141] libmachine: (ha-919901)     <bootmenu enable='no'/>
	I0812 10:36:36.727449   22139 main.go:141] libmachine: (ha-919901)   </os>
	I0812 10:36:36.727460   22139 main.go:141] libmachine: (ha-919901)   <devices>
	I0812 10:36:36.727471   22139 main.go:141] libmachine: (ha-919901)     <disk type='file' device='cdrom'>
	I0812 10:36:36.727490   22139 main.go:141] libmachine: (ha-919901)       <source file='/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/boot2docker.iso'/>
	I0812 10:36:36.727503   22139 main.go:141] libmachine: (ha-919901)       <target dev='hdc' bus='scsi'/>
	I0812 10:36:36.727513   22139 main.go:141] libmachine: (ha-919901)       <readonly/>
	I0812 10:36:36.727530   22139 main.go:141] libmachine: (ha-919901)     </disk>
	I0812 10:36:36.727541   22139 main.go:141] libmachine: (ha-919901)     <disk type='file' device='disk'>
	I0812 10:36:36.727560   22139 main.go:141] libmachine: (ha-919901)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0812 10:36:36.727580   22139 main.go:141] libmachine: (ha-919901)       <source file='/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/ha-919901.rawdisk'/>
	I0812 10:36:36.727593   22139 main.go:141] libmachine: (ha-919901)       <target dev='hda' bus='virtio'/>
	I0812 10:36:36.727603   22139 main.go:141] libmachine: (ha-919901)     </disk>
	I0812 10:36:36.727614   22139 main.go:141] libmachine: (ha-919901)     <interface type='network'>
	I0812 10:36:36.727626   22139 main.go:141] libmachine: (ha-919901)       <source network='mk-ha-919901'/>
	I0812 10:36:36.727638   22139 main.go:141] libmachine: (ha-919901)       <model type='virtio'/>
	I0812 10:36:36.727653   22139 main.go:141] libmachine: (ha-919901)     </interface>
	I0812 10:36:36.727664   22139 main.go:141] libmachine: (ha-919901)     <interface type='network'>
	I0812 10:36:36.727672   22139 main.go:141] libmachine: (ha-919901)       <source network='default'/>
	I0812 10:36:36.727681   22139 main.go:141] libmachine: (ha-919901)       <model type='virtio'/>
	I0812 10:36:36.727691   22139 main.go:141] libmachine: (ha-919901)     </interface>
	I0812 10:36:36.727700   22139 main.go:141] libmachine: (ha-919901)     <serial type='pty'>
	I0812 10:36:36.727711   22139 main.go:141] libmachine: (ha-919901)       <target port='0'/>
	I0812 10:36:36.727739   22139 main.go:141] libmachine: (ha-919901)     </serial>
	I0812 10:36:36.727760   22139 main.go:141] libmachine: (ha-919901)     <console type='pty'>
	I0812 10:36:36.727781   22139 main.go:141] libmachine: (ha-919901)       <target type='serial' port='0'/>
	I0812 10:36:36.727798   22139 main.go:141] libmachine: (ha-919901)     </console>
	I0812 10:36:36.727814   22139 main.go:141] libmachine: (ha-919901)     <rng model='virtio'>
	I0812 10:36:36.727831   22139 main.go:141] libmachine: (ha-919901)       <backend model='random'>/dev/random</backend>
	I0812 10:36:36.727844   22139 main.go:141] libmachine: (ha-919901)     </rng>
	I0812 10:36:36.727854   22139 main.go:141] libmachine: (ha-919901)     
	I0812 10:36:36.727873   22139 main.go:141] libmachine: (ha-919901)     
	I0812 10:36:36.727884   22139 main.go:141] libmachine: (ha-919901)   </devices>
	I0812 10:36:36.727893   22139 main.go:141] libmachine: (ha-919901) </domain>
	I0812 10:36:36.727908   22139 main.go:141] libmachine: (ha-919901) 
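	The network and domain XML dumped above can be compared with what libvirt actually created; a minimal inspection sketch, assuming virsh access to qemu:///system on the host (the names mk-ha-919901 and ha-919901 are taken from the log; these are standard virsh subcommands, not something the test itself runs):
	
	  virsh net-dumpxml mk-ha-919901   # the private network created at 10:36:36.426
	  virsh dumpxml ha-919901          # the generated <domain> definition
	  virsh domifaddr ha-919901        # DHCP lease, cf. the "Waiting to get IP" retries below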
	I0812 10:36:36.732085   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:d2:76:8c in network default
	I0812 10:36:36.732658   22139 main.go:141] libmachine: (ha-919901) Ensuring networks are active...
	I0812 10:36:36.732688   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:36.733512   22139 main.go:141] libmachine: (ha-919901) Ensuring network default is active
	I0812 10:36:36.733869   22139 main.go:141] libmachine: (ha-919901) Ensuring network mk-ha-919901 is active
	I0812 10:36:36.734468   22139 main.go:141] libmachine: (ha-919901) Getting domain xml...
	I0812 10:36:36.735258   22139 main.go:141] libmachine: (ha-919901) Creating domain...
	I0812 10:36:37.938658   22139 main.go:141] libmachine: (ha-919901) Waiting to get IP...
	I0812 10:36:37.939346   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:37.939776   22139 main.go:141] libmachine: (ha-919901) DBG | unable to find current IP address of domain ha-919901 in network mk-ha-919901
	I0812 10:36:37.939884   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:37.939787   22162 retry.go:31] will retry after 213.094827ms: waiting for machine to come up
	I0812 10:36:38.154220   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:38.154748   22139 main.go:141] libmachine: (ha-919901) DBG | unable to find current IP address of domain ha-919901 in network mk-ha-919901
	I0812 10:36:38.154779   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:38.154699   22162 retry.go:31] will retry after 338.084889ms: waiting for machine to come up
	I0812 10:36:38.493947   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:38.494320   22139 main.go:141] libmachine: (ha-919901) DBG | unable to find current IP address of domain ha-919901 in network mk-ha-919901
	I0812 10:36:38.494345   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:38.494285   22162 retry.go:31] will retry after 473.305282ms: waiting for machine to come up
	I0812 10:36:38.968861   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:38.969295   22139 main.go:141] libmachine: (ha-919901) DBG | unable to find current IP address of domain ha-919901 in network mk-ha-919901
	I0812 10:36:38.969328   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:38.969235   22162 retry.go:31] will retry after 564.539174ms: waiting for machine to come up
	I0812 10:36:39.535098   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:39.535570   22139 main.go:141] libmachine: (ha-919901) DBG | unable to find current IP address of domain ha-919901 in network mk-ha-919901
	I0812 10:36:39.535601   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:39.535526   22162 retry.go:31] will retry after 604.149167ms: waiting for machine to come up
	I0812 10:36:40.141250   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:40.141758   22139 main.go:141] libmachine: (ha-919901) DBG | unable to find current IP address of domain ha-919901 in network mk-ha-919901
	I0812 10:36:40.141782   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:40.141715   22162 retry.go:31] will retry after 943.023048ms: waiting for machine to come up
	I0812 10:36:41.085777   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:41.086112   22139 main.go:141] libmachine: (ha-919901) DBG | unable to find current IP address of domain ha-919901 in network mk-ha-919901
	I0812 10:36:41.086142   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:41.086064   22162 retry.go:31] will retry after 774.228398ms: waiting for machine to come up
	I0812 10:36:41.861586   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:41.862193   22139 main.go:141] libmachine: (ha-919901) DBG | unable to find current IP address of domain ha-919901 in network mk-ha-919901
	I0812 10:36:41.862222   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:41.862139   22162 retry.go:31] will retry after 1.205515582s: waiting for machine to come up
	I0812 10:36:43.069629   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:43.070159   22139 main.go:141] libmachine: (ha-919901) DBG | unable to find current IP address of domain ha-919901 in network mk-ha-919901
	I0812 10:36:43.070186   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:43.070112   22162 retry.go:31] will retry after 1.834177894s: waiting for machine to come up
	I0812 10:36:44.907232   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:44.907755   22139 main.go:141] libmachine: (ha-919901) DBG | unable to find current IP address of domain ha-919901 in network mk-ha-919901
	I0812 10:36:44.907777   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:44.907711   22162 retry.go:31] will retry after 1.903930049s: waiting for machine to come up
	I0812 10:36:46.813730   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:46.814253   22139 main.go:141] libmachine: (ha-919901) DBG | unable to find current IP address of domain ha-919901 in network mk-ha-919901
	I0812 10:36:46.814277   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:46.814216   22162 retry.go:31] will retry after 2.852173088s: waiting for machine to come up
	I0812 10:36:49.670605   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:49.671236   22139 main.go:141] libmachine: (ha-919901) DBG | unable to find current IP address of domain ha-919901 in network mk-ha-919901
	I0812 10:36:49.671259   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:49.671167   22162 retry.go:31] will retry after 3.596494825s: waiting for machine to come up
	I0812 10:36:53.270609   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:53.271187   22139 main.go:141] libmachine: (ha-919901) DBG | unable to find current IP address of domain ha-919901 in network mk-ha-919901
	I0812 10:36:53.271212   22139 main.go:141] libmachine: (ha-919901) DBG | I0812 10:36:53.271153   22162 retry.go:31] will retry after 3.244912687s: waiting for machine to come up
	I0812 10:36:56.517582   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:56.518056   22139 main.go:141] libmachine: (ha-919901) Found IP for machine: 192.168.39.5
	I0812 10:36:56.518072   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has current primary IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:56.518078   22139 main.go:141] libmachine: (ha-919901) Reserving static IP address...
	I0812 10:36:56.518512   22139 main.go:141] libmachine: (ha-919901) DBG | unable to find host DHCP lease matching {name: "ha-919901", mac: "52:54:00:8b:40:2a", ip: "192.168.39.5"} in network mk-ha-919901
	I0812 10:36:56.598209   22139 main.go:141] libmachine: (ha-919901) DBG | Getting to WaitForSSH function...
	I0812 10:36:56.598245   22139 main.go:141] libmachine: (ha-919901) Reserved static IP address: 192.168.39.5
	I0812 10:36:56.598257   22139 main.go:141] libmachine: (ha-919901) Waiting for SSH to be available...
	I0812 10:36:56.600922   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:56.601331   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:56.601360   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:56.601519   22139 main.go:141] libmachine: (ha-919901) DBG | Using SSH client type: external
	I0812 10:36:56.601532   22139 main.go:141] libmachine: (ha-919901) DBG | Using SSH private key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa (-rw-------)
	I0812 10:36:56.601557   22139 main.go:141] libmachine: (ha-919901) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.5 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0812 10:36:56.601582   22139 main.go:141] libmachine: (ha-919901) DBG | About to run SSH command:
	I0812 10:36:56.601595   22139 main.go:141] libmachine: (ha-919901) DBG | exit 0
	I0812 10:36:56.729201   22139 main.go:141] libmachine: (ha-919901) DBG | SSH cmd err, output: <nil>: 
	I0812 10:36:56.729508   22139 main.go:141] libmachine: (ha-919901) KVM machine creation complete!
	I0812 10:36:56.729857   22139 main.go:141] libmachine: (ha-919901) Calling .GetConfigRaw
	I0812 10:36:56.730394   22139 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:36:56.730579   22139 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:36:56.730773   22139 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0812 10:36:56.730801   22139 main.go:141] libmachine: (ha-919901) Calling .GetState
	I0812 10:36:56.732499   22139 main.go:141] libmachine: Detecting operating system of created instance...
	I0812 10:36:56.732518   22139 main.go:141] libmachine: Waiting for SSH to be available...
	I0812 10:36:56.732535   22139 main.go:141] libmachine: Getting to WaitForSSH function...
	I0812 10:36:56.732548   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:36:56.735116   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:56.735464   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:56.735496   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:56.735620   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:36:56.735833   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:36:56.735989   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:36:56.736122   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:36:56.736287   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:36:56.736530   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0812 10:36:56.736544   22139 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0812 10:36:56.844291   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 10:36:56.844315   22139 main.go:141] libmachine: Detecting the provisioner...
	I0812 10:36:56.844323   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:36:56.847109   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:56.847480   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:56.847503   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:56.847673   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:36:56.847879   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:36:56.848116   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:36:56.848257   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:36:56.848433   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:36:56.848632   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0812 10:36:56.848647   22139 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0812 10:36:56.957579   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0812 10:36:56.957674   22139 main.go:141] libmachine: found compatible host: buildroot
	I0812 10:36:56.957688   22139 main.go:141] libmachine: Provisioning with buildroot...
	I0812 10:36:56.957698   22139 main.go:141] libmachine: (ha-919901) Calling .GetMachineName
	I0812 10:36:56.957973   22139 buildroot.go:166] provisioning hostname "ha-919901"
	I0812 10:36:56.957999   22139 main.go:141] libmachine: (ha-919901) Calling .GetMachineName
	I0812 10:36:56.958187   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:36:56.960833   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:56.961211   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:56.961234   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:56.961442   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:36:56.961645   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:36:56.961800   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:36:56.961982   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:36:56.962129   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:36:56.962296   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0812 10:36:56.962309   22139 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-919901 && echo "ha-919901" | sudo tee /etc/hostname
	I0812 10:36:57.083078   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-919901
	
	I0812 10:36:57.083102   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:36:57.086058   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.086459   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:57.086480   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.086649   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:36:57.086848   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:36:57.087030   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:36:57.087195   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:36:57.087403   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:36:57.087611   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0812 10:36:57.087635   22139 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-919901' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-919901/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-919901' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 10:36:57.205837   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 10:36:57.205865   22139 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19409-3774/.minikube CaCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19409-3774/.minikube}
	I0812 10:36:57.205889   22139 buildroot.go:174] setting up certificates
	I0812 10:36:57.205902   22139 provision.go:84] configureAuth start
	I0812 10:36:57.205914   22139 main.go:141] libmachine: (ha-919901) Calling .GetMachineName
	I0812 10:36:57.206217   22139 main.go:141] libmachine: (ha-919901) Calling .GetIP
	I0812 10:36:57.209219   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.209615   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:57.209658   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.209816   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:36:57.212139   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.212538   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:57.212565   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.212696   22139 provision.go:143] copyHostCerts
	I0812 10:36:57.212729   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem
	I0812 10:36:57.212778   22139 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem, removing ...
	I0812 10:36:57.212790   22139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem
	I0812 10:36:57.212886   22139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem (1082 bytes)
	I0812 10:36:57.212980   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem
	I0812 10:36:57.213008   22139 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem, removing ...
	I0812 10:36:57.213018   22139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem
	I0812 10:36:57.213054   22139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem (1123 bytes)
	I0812 10:36:57.213111   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem
	I0812 10:36:57.213135   22139 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem, removing ...
	I0812 10:36:57.213144   22139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem
	I0812 10:36:57.213177   22139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem (1679 bytes)
	I0812 10:36:57.213242   22139 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem org=jenkins.ha-919901 san=[127.0.0.1 192.168.39.5 ha-919901 localhost minikube]
	I0812 10:36:57.317181   22139 provision.go:177] copyRemoteCerts
	I0812 10:36:57.317234   22139 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 10:36:57.317256   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:36:57.320500   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.320853   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:57.320905   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.321086   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:36:57.321283   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:36:57.321442   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:36:57.321590   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:36:57.407099   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0812 10:36:57.407176   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0812 10:36:57.430546   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0812 10:36:57.430627   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0812 10:36:57.454395   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0812 10:36:57.454483   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0812 10:36:57.477911   22139 provision.go:87] duration metric: took 271.996825ms to configureAuth
	I0812 10:36:57.477941   22139 buildroot.go:189] setting minikube options for container-runtime
	I0812 10:36:57.478147   22139 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:36:57.478245   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:36:57.481239   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.481781   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:57.481804   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.482039   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:36:57.482240   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:36:57.482418   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:36:57.482564   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:36:57.482780   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:36:57.483016   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0812 10:36:57.483038   22139 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 10:36:57.756403   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 10:36:57.756458   22139 main.go:141] libmachine: Checking connection to Docker...
	I0812 10:36:57.756468   22139 main.go:141] libmachine: (ha-919901) Calling .GetURL
	I0812 10:36:57.757779   22139 main.go:141] libmachine: (ha-919901) DBG | Using libvirt version 6000000
	I0812 10:36:57.761295   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.761720   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:57.761744   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.761945   22139 main.go:141] libmachine: Docker is up and running!
	I0812 10:36:57.761958   22139 main.go:141] libmachine: Reticulating splines...
	I0812 10:36:57.761977   22139 client.go:171] duration metric: took 21.411907085s to LocalClient.Create
	I0812 10:36:57.761998   22139 start.go:167] duration metric: took 21.411984441s to libmachine.API.Create "ha-919901"
	I0812 10:36:57.762007   22139 start.go:293] postStartSetup for "ha-919901" (driver="kvm2")
	I0812 10:36:57.762016   22139 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 10:36:57.762028   22139 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:36:57.762276   22139 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 10:36:57.762306   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:36:57.764595   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.764993   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:57.765015   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.765146   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:36:57.765324   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:36:57.765498   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:36:57.765659   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:36:57.851838   22139 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 10:36:57.856061   22139 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 10:36:57.856086   22139 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/addons for local assets ...
	I0812 10:36:57.856162   22139 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/files for local assets ...
	I0812 10:36:57.856300   22139 filesync.go:149] local asset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> 109272.pem in /etc/ssl/certs
	I0812 10:36:57.856312   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> /etc/ssl/certs/109272.pem
	I0812 10:36:57.856417   22139 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 10:36:57.865276   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /etc/ssl/certs/109272.pem (1708 bytes)
	I0812 10:36:57.888801   22139 start.go:296] duration metric: took 126.783362ms for postStartSetup
	I0812 10:36:57.888852   22139 main.go:141] libmachine: (ha-919901) Calling .GetConfigRaw
	I0812 10:36:57.889571   22139 main.go:141] libmachine: (ha-919901) Calling .GetIP
	I0812 10:36:57.892981   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.893467   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:57.893504   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.893815   22139 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/config.json ...
	I0812 10:36:57.894011   22139 start.go:128] duration metric: took 21.563142297s to createHost
	I0812 10:36:57.894045   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:36:57.896579   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.897009   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:57.897034   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:57.897233   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:36:57.897463   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:36:57.897662   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:36:57.897864   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:36:57.898053   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:36:57.898219   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0812 10:36:57.898230   22139 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0812 10:36:58.009563   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723459017.984367599
	
	I0812 10:36:58.009592   22139 fix.go:216] guest clock: 1723459017.984367599
	I0812 10:36:58.009603   22139 fix.go:229] Guest: 2024-08-12 10:36:57.984367599 +0000 UTC Remote: 2024-08-12 10:36:57.89402311 +0000 UTC m=+21.678200750 (delta=90.344489ms)
	I0812 10:36:58.009630   22139 fix.go:200] guest clock delta is within tolerance: 90.344489ms
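
Editor's note: fix.go compares the guest's wall clock against the host and only flags the machine for a time resync when the delta exceeds a tolerance; here the ~90ms delta passes. A minimal, hypothetical sketch of that comparison follows (the 2s tolerance and the sample delta are assumptions for illustration, not minikube's configured values).

// Minimal sketch of a guest-clock tolerance check.
package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance returns the absolute host/guest clock delta and
// whether it is small enough to skip a resync.
func clockDeltaWithinTolerance(host, guest time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(90 * time.Millisecond) // roughly the delta seen in the log
	delta, ok := clockDeltaWithinTolerance(host, guest, 2*time.Second)
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
}
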
	I0812 10:36:58.009638   22139 start.go:83] releasing machines lock for "ha-919901", held for 21.678838542s
	I0812 10:36:58.009668   22139 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:36:58.009964   22139 main.go:141] libmachine: (ha-919901) Calling .GetIP
	I0812 10:36:58.013123   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:58.013592   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:58.013620   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:58.013757   22139 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:36:58.014381   22139 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:36:58.014581   22139 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:36:58.014672   22139 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 10:36:58.014709   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:36:58.014810   22139 ssh_runner.go:195] Run: cat /version.json
	I0812 10:36:58.014830   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:36:58.017738   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:58.017947   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:58.018233   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:58.018256   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:58.018309   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:58.018329   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:58.018463   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:36:58.018594   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:36:58.018678   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:36:58.018771   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:36:58.018790   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:36:58.018887   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:36:58.018945   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:36:58.019043   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:36:58.134918   22139 ssh_runner.go:195] Run: systemctl --version
	I0812 10:36:58.141016   22139 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 10:36:58.306900   22139 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 10:36:58.313419   22139 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 10:36:58.313479   22139 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 10:36:58.329408   22139 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0812 10:36:58.329438   22139 start.go:495] detecting cgroup driver to use...
	I0812 10:36:58.329504   22139 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 10:36:58.348891   22139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 10:36:58.363551   22139 docker.go:217] disabling cri-docker service (if available) ...
	I0812 10:36:58.363610   22139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 10:36:58.377888   22139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 10:36:58.391991   22139 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 10:36:58.516125   22139 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 10:36:58.678304   22139 docker.go:233] disabling docker service ...
	I0812 10:36:58.678383   22139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 10:36:58.692246   22139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 10:36:58.704725   22139 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 10:36:58.816659   22139 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 10:36:58.933414   22139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 10:36:58.947832   22139 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 10:36:58.966113   22139 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0812 10:36:58.966174   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:36:58.976967   22139 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 10:36:58.977042   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:36:58.988239   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:36:58.999792   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:36:59.010341   22139 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 10:36:59.022445   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:36:59.034253   22139 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:36:59.052423   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
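
Editor's note: the sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place, pinning the pause image to registry.k8s.io/pause:3.9, switching cgroup_manager to cgroupfs, and seeding default_sysctls. Below is a stand-in sketch of the same text transformation done in Go on an invented in-memory sample; the real edits run over SSH on the guest, not locally.

// Stand-in for the sed edits above: rewrite pause_image and cgroup_manager keys.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Invented sample content standing in for 02-crio.conf.
	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.8\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}
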
	I0812 10:36:59.064051   22139 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 10:36:59.073678   22139 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0812 10:36:59.073744   22139 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0812 10:36:59.087397   22139 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 10:36:59.097682   22139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 10:36:59.210522   22139 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0812 10:36:59.347232   22139 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 10:36:59.347310   22139 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 10:36:59.352076   22139 start.go:563] Will wait 60s for crictl version
	I0812 10:36:59.352150   22139 ssh_runner.go:195] Run: which crictl
	I0812 10:36:59.356036   22139 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 10:36:59.393047   22139 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0812 10:36:59.393122   22139 ssh_runner.go:195] Run: crio --version
	I0812 10:36:59.421037   22139 ssh_runner.go:195] Run: crio --version
	I0812 10:36:59.451603   22139 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0812 10:36:59.452978   22139 main.go:141] libmachine: (ha-919901) Calling .GetIP
	I0812 10:36:59.456259   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:59.456659   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:36:59.456681   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:36:59.457018   22139 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0812 10:36:59.461511   22139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 10:36:59.473961   22139 kubeadm.go:883] updating cluster {Name:ha-919901 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-919901 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0812 10:36:59.474097   22139 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 10:36:59.474155   22139 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 10:36:59.506010   22139 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0812 10:36:59.506074   22139 ssh_runner.go:195] Run: which lz4
	I0812 10:36:59.510208   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0812 10:36:59.510329   22139 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0812 10:36:59.514484   22139 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0812 10:36:59.514518   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0812 10:37:00.770263   22139 crio.go:462] duration metric: took 1.259980161s to copy over tarball
	I0812 10:37:00.770361   22139 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0812 10:37:02.903214   22139 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.132827142s)
	I0812 10:37:02.903246   22139 crio.go:469] duration metric: took 2.132947707s to extract the tarball
	I0812 10:37:02.903255   22139 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0812 10:37:02.940359   22139 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 10:37:02.987236   22139 crio.go:514] all images are preloaded for cri-o runtime.
	I0812 10:37:02.987259   22139 cache_images.go:84] Images are preloaded, skipping loading
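
Editor's note: because the guest had no /preloaded.tar.lz4, the preloaded image tarball was copied over and unpacked under /var before crictl re-listed the images. A hedged sketch of that copy-then-extract flow follows; the exec-based tar call mirrors the command in the log, but the function is an illustrative stand-in, not minikube's ssh_runner.

// Rough sketch of the preload check-and-extract flow logged above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func extractPreload(tarball, destDir string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload tarball not present, it would be scp'd over first: %w", err)
	}
	// Mirrors: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
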
	I0812 10:37:02.987267   22139 kubeadm.go:934] updating node { 192.168.39.5 8443 v1.30.3 crio true true} ...
	I0812 10:37:02.987357   22139 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-919901 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-919901 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0812 10:37:02.987431   22139 ssh_runner.go:195] Run: crio config
	I0812 10:37:03.030874   22139 cni.go:84] Creating CNI manager for ""
	I0812 10:37:03.030898   22139 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0812 10:37:03.030908   22139 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0812 10:37:03.030928   22139 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.5 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-919901 NodeName:ha-919901 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0812 10:37:03.031049   22139 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-919901"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
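Editor's note: the kubeadm config above is rendered from the cluster profile values visible in the log (node name, node IP, pod and service CIDRs, CRI socket). As a sketch of how such a fragment can be templated, assuming nothing about minikube's real template, the nodeRegistration block could be produced like this:

// Sketch: render the nodeRegistration fragment from per-node values.
package main

import (
	"os"
	"text/template"
)

// nodeRegistrationTmpl is an assumed template covering only the block shown in the log.
const nodeRegistrationTmpl = `nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("nodeRegistration").Parse(nodeRegistrationTmpl))
	if err := t.Execute(os.Stdout, struct{ NodeName, NodeIP string }{"ha-919901", "192.168.39.5"}); err != nil {
		panic(err)
	}
}
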
	I0812 10:37:03.031070   22139 kube-vip.go:115] generating kube-vip config ...
	I0812 10:37:03.031114   22139 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0812 10:37:03.048350   22139 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0812 10:37:03.048469   22139 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0812 10:37:03.048523   22139 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0812 10:37:03.058393   22139 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 10:37:03.058467   22139 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0812 10:37:03.067759   22139 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0812 10:37:03.085108   22139 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 10:37:03.101314   22139 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0812 10:37:03.117869   22139 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0812 10:37:03.134602   22139 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0812 10:37:03.138466   22139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
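
Editor's note: both /etc/hosts edits in this log (host.minikube.internal and control-plane.minikube.internal) use the same pattern: strip any stale line for the name, then append the fresh "IP<tab>name" entry. A hypothetical Go equivalent that returns the new content instead of rewriting /etc/hosts:

// Hypothetical equivalent of the /etc/hosts one-liner above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsLine(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // same effect as: grep -v $'\t<name>$'
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Print(ensureHostsLine(string(data), "192.168.39.254", "control-plane.minikube.internal"))
}
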
	I0812 10:37:03.150761   22139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 10:37:03.279305   22139 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 10:37:03.296808   22139 certs.go:68] Setting up /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901 for IP: 192.168.39.5
	I0812 10:37:03.296836   22139 certs.go:194] generating shared ca certs ...
	I0812 10:37:03.296857   22139 certs.go:226] acquiring lock for ca certs: {Name:mkab0b83a49d5695bc804bf960b468b7a47f73f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:37:03.297052   22139 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key
	I0812 10:37:03.297122   22139 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key
	I0812 10:37:03.297136   22139 certs.go:256] generating profile certs ...
	I0812 10:37:03.297202   22139 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/client.key
	I0812 10:37:03.297221   22139 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/client.crt with IP's: []
	I0812 10:37:03.435567   22139 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/client.crt ...
	I0812 10:37:03.435593   22139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/client.crt: {Name:mkf76e1a58a19a83271906e0f2205d004df4fb05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:37:03.435765   22139 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/client.key ...
	I0812 10:37:03.435777   22139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/client.key: {Name:mk683136baf4eed8ba89411e31352ad328795fe9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:37:03.435852   22139 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key.e53dde7e
	I0812 10:37:03.435867   22139 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt.e53dde7e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.5 192.168.39.254]
	I0812 10:37:03.610013   22139 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt.e53dde7e ...
	I0812 10:37:03.610042   22139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt.e53dde7e: {Name:mk5995f26b966ef3bce995ce8597f3a2b6f2a70a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:37:03.610208   22139 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key.e53dde7e ...
	I0812 10:37:03.610221   22139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key.e53dde7e: {Name:mk1f9b400bd5620d6f41206bd125d9617c3b8ae4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:37:03.610285   22139 certs.go:381] copying /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt.e53dde7e -> /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt
	I0812 10:37:03.610374   22139 certs.go:385] copying /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key.e53dde7e -> /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key
	I0812 10:37:03.610428   22139 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.key
	I0812 10:37:03.610443   22139 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.crt with IP's: []
	I0812 10:37:03.858769   22139 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.crt ...
	I0812 10:37:03.858798   22139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.crt: {Name:mkd64192a1dbaf3f8110409ad2ff7466f51e63ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:37:03.858946   22139 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.key ...
	I0812 10:37:03.858964   22139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.key: {Name:mk2850f76409b91e271b83360aab16a8d76d22e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:37:03.859054   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0812 10:37:03.859071   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0812 10:37:03.859081   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0812 10:37:03.859094   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0812 10:37:03.859107   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0812 10:37:03.859120   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0812 10:37:03.859133   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0812 10:37:03.859145   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0812 10:37:03.859193   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem (1338 bytes)
	W0812 10:37:03.859225   22139 certs.go:480] ignoring /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927_empty.pem, impossibly tiny 0 bytes
	I0812 10:37:03.859234   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem (1679 bytes)
	I0812 10:37:03.859256   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem (1082 bytes)
	I0812 10:37:03.859277   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem (1123 bytes)
	I0812 10:37:03.859298   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem (1679 bytes)
	I0812 10:37:03.859334   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem (1708 bytes)
	I0812 10:37:03.859368   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:37:03.859381   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem -> /usr/share/ca-certificates/10927.pem
	I0812 10:37:03.859393   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> /usr/share/ca-certificates/109272.pem
	I0812 10:37:03.859961   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 10:37:03.885330   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 10:37:03.908980   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 10:37:03.932653   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 10:37:03.957691   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0812 10:37:03.980958   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0812 10:37:04.004552   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 10:37:04.028960   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0812 10:37:04.052754   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 10:37:04.079489   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem --> /usr/share/ca-certificates/10927.pem (1338 bytes)
	I0812 10:37:04.118871   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /usr/share/ca-certificates/109272.pem (1708 bytes)
	I0812 10:37:04.151137   22139 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 10:37:04.168140   22139 ssh_runner.go:195] Run: openssl version
	I0812 10:37:04.174043   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 10:37:04.184674   22139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:37:04.189753   22139 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:37:04.189813   22139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:37:04.196068   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 10:37:04.206640   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10927.pem && ln -fs /usr/share/ca-certificates/10927.pem /etc/ssl/certs/10927.pem"
	I0812 10:37:04.217384   22139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10927.pem
	I0812 10:37:04.222219   22139 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 10:33 /usr/share/ca-certificates/10927.pem
	I0812 10:37:04.222283   22139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10927.pem
	I0812 10:37:04.227981   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10927.pem /etc/ssl/certs/51391683.0"
	I0812 10:37:04.238698   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109272.pem && ln -fs /usr/share/ca-certificates/109272.pem /etc/ssl/certs/109272.pem"
	I0812 10:37:04.249626   22139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109272.pem
	I0812 10:37:04.254061   22139 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 10:33 /usr/share/ca-certificates/109272.pem
	I0812 10:37:04.254128   22139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109272.pem
	I0812 10:37:04.259663   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109272.pem /etc/ssl/certs/3ec20f2e.0"
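
Editor's note: the openssl/ln steps above wire the minikube CA and the test certificates into the system trust store. OpenSSL resolves CA certificates through <subject-hash>.0 symlinks under /etc/ssl/certs, so each installed PEM gets a link named after the output of "openssl x509 -hash". The sketch below shows that wiring; paths are placeholders and creating the symlink normally requires root.

// Sketch: create an OpenSSL subject-hash symlink for an installed CA PEM.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCertByHash(pemPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certDir, strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // roughly: test -L <link> already satisfied
	}
	return os.Symlink(pemPath, link) // roughly: ln -fs <pem> <link>
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
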
	I0812 10:37:04.270902   22139 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 10:37:04.275889   22139 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0812 10:37:04.275949   22139 kubeadm.go:392] StartCluster: {Name:ha-919901 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-919901 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 10:37:04.276053   22139 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0812 10:37:04.276130   22139 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 10:37:04.318376   22139 cri.go:89] found id: ""
	I0812 10:37:04.318457   22139 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0812 10:37:04.329217   22139 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 10:37:04.339184   22139 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 10:37:04.348640   22139 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 10:37:04.348661   22139 kubeadm.go:157] found existing configuration files:
	
	I0812 10:37:04.348703   22139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 10:37:04.357819   22139 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 10:37:04.357887   22139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 10:37:04.368911   22139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 10:37:04.378409   22139 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 10:37:04.378472   22139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 10:37:04.389662   22139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 10:37:04.400599   22139 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 10:37:04.400672   22139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 10:37:04.412426   22139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 10:37:04.423193   22139 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 10:37:04.423254   22139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 10:37:04.434581   22139 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 10:37:04.556756   22139 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0812 10:37:04.556847   22139 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 10:37:04.679286   22139 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 10:37:04.679392   22139 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 10:37:04.679501   22139 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0812 10:37:04.883377   22139 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 10:37:05.013776   22139 out.go:204]   - Generating certificates and keys ...
	I0812 10:37:05.013892   22139 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 10:37:05.013999   22139 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 10:37:05.021650   22139 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0812 10:37:05.105693   22139 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0812 10:37:05.204662   22139 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0812 10:37:05.472479   22139 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0812 10:37:05.625833   22139 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0812 10:37:05.625971   22139 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-919901 localhost] and IPs [192.168.39.5 127.0.0.1 ::1]
	I0812 10:37:05.895297   22139 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0812 10:37:05.895485   22139 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-919901 localhost] and IPs [192.168.39.5 127.0.0.1 ::1]
	I0812 10:37:05.956929   22139 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0812 10:37:06.216059   22139 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0812 10:37:06.259832   22139 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0812 10:37:06.259922   22139 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 10:37:06.373511   22139 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 10:37:06.490156   22139 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0812 10:37:06.604171   22139 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 10:37:06.669583   22139 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 10:37:06.788499   22139 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 10:37:06.789058   22139 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 10:37:06.791857   22139 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 10:37:06.793956   22139 out.go:204]   - Booting up control plane ...
	I0812 10:37:06.794048   22139 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 10:37:06.794129   22139 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 10:37:06.794218   22139 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 10:37:06.812534   22139 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 10:37:06.813476   22139 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 10:37:06.813534   22139 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 10:37:06.943625   22139 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0812 10:37:06.943703   22139 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0812 10:37:07.444184   22139 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.948018ms
	I0812 10:37:07.444267   22139 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0812 10:37:13.481049   22139 kubeadm.go:310] [api-check] The API server is healthy after 6.040247289s
	I0812 10:37:13.499700   22139 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0812 10:37:13.517044   22139 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0812 10:37:14.047469   22139 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0812 10:37:14.047716   22139 kubeadm.go:310] [mark-control-plane] Marking the node ha-919901 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0812 10:37:14.065443   22139 kubeadm.go:310] [bootstrap-token] Using token: ddr49h.zjklblvn621csm71
	I0812 10:37:14.067339   22139 out.go:204]   - Configuring RBAC rules ...
	I0812 10:37:14.067502   22139 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0812 10:37:14.073047   22139 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0812 10:37:14.084914   22139 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0812 10:37:14.088276   22139 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0812 10:37:14.091458   22139 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0812 10:37:14.095141   22139 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0812 10:37:14.114360   22139 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0812 10:37:14.374796   22139 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0812 10:37:14.890413   22139 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0812 10:37:14.891467   22139 kubeadm.go:310] 
	I0812 10:37:14.891542   22139 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0812 10:37:14.891564   22139 kubeadm.go:310] 
	I0812 10:37:14.891700   22139 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0812 10:37:14.891736   22139 kubeadm.go:310] 
	I0812 10:37:14.891797   22139 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0812 10:37:14.891874   22139 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0812 10:37:14.891948   22139 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0812 10:37:14.891957   22139 kubeadm.go:310] 
	I0812 10:37:14.892030   22139 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0812 10:37:14.892040   22139 kubeadm.go:310] 
	I0812 10:37:14.892134   22139 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0812 10:37:14.892152   22139 kubeadm.go:310] 
	I0812 10:37:14.892216   22139 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0812 10:37:14.892329   22139 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0812 10:37:14.892420   22139 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0812 10:37:14.892431   22139 kubeadm.go:310] 
	I0812 10:37:14.892550   22139 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0812 10:37:14.892651   22139 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0812 10:37:14.892665   22139 kubeadm.go:310] 
	I0812 10:37:14.892775   22139 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ddr49h.zjklblvn621csm71 \
	I0812 10:37:14.892950   22139 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc \
	I0812 10:37:14.893004   22139 kubeadm.go:310] 	--control-plane 
	I0812 10:37:14.893015   22139 kubeadm.go:310] 
	I0812 10:37:14.893124   22139 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0812 10:37:14.893138   22139 kubeadm.go:310] 
	I0812 10:37:14.893235   22139 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ddr49h.zjklblvn621csm71 \
	I0812 10:37:14.893394   22139 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc 
	I0812 10:37:14.893546   22139 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 10:37:14.893559   22139 cni.go:84] Creating CNI manager for ""
	I0812 10:37:14.893565   22139 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0812 10:37:14.895475   22139 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0812 10:37:14.896683   22139 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0812 10:37:14.902178   22139 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0812 10:37:14.902201   22139 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0812 10:37:14.925710   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0812 10:37:15.282032   22139 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 10:37:15.282131   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:15.282152   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-919901 minikube.k8s.io/updated_at=2024_08_12T10_37_15_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7 minikube.k8s.io/name=ha-919901 minikube.k8s.io/primary=true
	I0812 10:37:15.412386   22139 ops.go:34] apiserver oom_adj: -16
	I0812 10:37:15.412591   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:15.913317   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:16.412669   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:16.913172   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:17.413013   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:17.912853   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:18.413497   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:18.913669   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:19.412998   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:19.912734   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:20.413186   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:20.912731   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:21.413508   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:21.912784   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:22.412763   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:22.912882   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:23.413080   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:23.913390   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:24.412716   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:24.912693   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:25.413011   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:25.913281   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:26.413171   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:26.913156   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:27.413463   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 10:37:27.536903   22139 kubeadm.go:1113] duration metric: took 12.254848272s to wait for elevateKubeSystemPrivileges
	I0812 10:37:27.536936   22139 kubeadm.go:394] duration metric: took 23.260991872s to StartCluster
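
The burst of identical `kubectl get sa default` invocations above is minikube polling (roughly every 500ms, judging by the timestamps) until the `default` ServiceAccount exists, which it treats as the signal that the kube-system bootstrap has finished before elevating privileges. A minimal stand-alone sketch of that pattern in Go; the kubectl and kubeconfig paths are taken from the log, but the helper itself is illustrative and not minikube's actual function:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultServiceAccount polls `kubectl get sa default` until it
    // succeeds or the deadline passes, mirroring the ~500ms cadence above.
    func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
            if err := cmd.Run(); err == nil {
                return nil // the default ServiceAccount exists
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("default ServiceAccount not ready after %s", timeout)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        err := waitForDefaultServiceAccount(
            "/var/lib/minikube/binaries/v1.30.3/kubectl",
            "/var/lib/minikube/kubeconfig",
            2*time.Minute)
        fmt.Println("wait result:", err)
    }
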
	I0812 10:37:27.536952   22139 settings.go:142] acquiring lock: {Name:mk4060151bda1f131d6abfbbd19b6a8ab3b5e774 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:37:27.537021   22139 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 10:37:27.537714   22139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/kubeconfig: {Name:mke68f1d372d8e5baa69199b426efaec54860499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:37:27.537921   22139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0812 10:37:27.537956   22139 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0812 10:37:27.538027   22139 addons.go:69] Setting storage-provisioner=true in profile "ha-919901"
	I0812 10:37:27.537919   22139 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 10:37:27.538053   22139 addons.go:234] Setting addon storage-provisioner=true in "ha-919901"
	I0812 10:37:27.538056   22139 addons.go:69] Setting default-storageclass=true in profile "ha-919901"
	I0812 10:37:27.538059   22139 start.go:241] waiting for startup goroutines ...
	I0812 10:37:27.538085   22139 host.go:66] Checking if "ha-919901" exists ...
	I0812 10:37:27.538092   22139 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-919901"
	I0812 10:37:27.538167   22139 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:37:27.538571   22139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:37:27.538620   22139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:37:27.538697   22139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:37:27.538732   22139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:37:27.554125   22139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41601
	I0812 10:37:27.554664   22139 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:37:27.554705   22139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38663
	I0812 10:37:27.555147   22139 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:37:27.555289   22139 main.go:141] libmachine: Using API Version  1
	I0812 10:37:27.555314   22139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:37:27.555721   22139 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:37:27.555852   22139 main.go:141] libmachine: Using API Version  1
	I0812 10:37:27.555882   22139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:37:27.556207   22139 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:37:27.556355   22139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:37:27.556388   22139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:37:27.556395   22139 main.go:141] libmachine: (ha-919901) Calling .GetState
	I0812 10:37:27.558679   22139 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 10:37:27.559028   22139 kapi.go:59] client config for ha-919901: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/client.crt", KeyFile:"/home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/client.key", CAFile:"/home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d03100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0812 10:37:27.559599   22139 cert_rotation.go:137] Starting client certificate rotation controller
	I0812 10:37:27.559825   22139 addons.go:234] Setting addon default-storageclass=true in "ha-919901"
	I0812 10:37:27.559875   22139 host.go:66] Checking if "ha-919901" exists ...
	I0812 10:37:27.560245   22139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:37:27.560292   22139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:37:27.573229   22139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33615
	I0812 10:37:27.573754   22139 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:37:27.574430   22139 main.go:141] libmachine: Using API Version  1
	I0812 10:37:27.574461   22139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:37:27.574872   22139 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:37:27.575124   22139 main.go:141] libmachine: (ha-919901) Calling .GetState
	I0812 10:37:27.577006   22139 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:37:27.577086   22139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46379
	I0812 10:37:27.577571   22139 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:37:27.578060   22139 main.go:141] libmachine: Using API Version  1
	I0812 10:37:27.578076   22139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:37:27.578355   22139 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:37:27.578907   22139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:37:27.578941   22139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:37:27.579888   22139 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 10:37:27.581264   22139 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 10:37:27.581282   22139 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 10:37:27.581303   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:37:27.584447   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:37:27.584821   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:37:27.584854   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:37:27.585047   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:37:27.585276   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:37:27.585492   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:37:27.585657   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:37:27.595948   22139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39825
	I0812 10:37:27.596502   22139 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:37:27.597079   22139 main.go:141] libmachine: Using API Version  1
	I0812 10:37:27.597108   22139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:37:27.597438   22139 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:37:27.597659   22139 main.go:141] libmachine: (ha-919901) Calling .GetState
	I0812 10:37:27.599596   22139 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:37:27.599875   22139 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 10:37:27.599893   22139 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 10:37:27.599911   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:37:27.602660   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:37:27.603052   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:37:27.603086   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:37:27.603293   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:37:27.603524   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:37:27.603704   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:37:27.603863   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:37:27.699239   22139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0812 10:37:27.766958   22139 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 10:37:27.794871   22139 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 10:37:28.120336   22139 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
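
For readability: the long `sed` pipeline a few lines up edits the CoreDNS ConfigMap so that `host.minikube.internal` resolves to 192.168.39.1 (the host as seen from the VM network). Reconstructed from that command, the fragment it inserts into the Corefile before the `forward . /etc/resolv.conf` line looks like this (it also adds a `log` directive after `errors`):

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
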
	I0812 10:37:28.446833   22139 main.go:141] libmachine: Making call to close driver server
	I0812 10:37:28.446863   22139 main.go:141] libmachine: (ha-919901) Calling .Close
	I0812 10:37:28.446912   22139 main.go:141] libmachine: Making call to close driver server
	I0812 10:37:28.446934   22139 main.go:141] libmachine: (ha-919901) Calling .Close
	I0812 10:37:28.447181   22139 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:37:28.447207   22139 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:37:28.447250   22139 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:37:28.447269   22139 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:37:28.447282   22139 main.go:141] libmachine: Making call to close driver server
	I0812 10:37:28.447281   22139 main.go:141] libmachine: (ha-919901) DBG | Closing plugin on server side
	I0812 10:37:28.447290   22139 main.go:141] libmachine: (ha-919901) Calling .Close
	I0812 10:37:28.447255   22139 main.go:141] libmachine: Making call to close driver server
	I0812 10:37:28.447336   22139 main.go:141] libmachine: (ha-919901) Calling .Close
	I0812 10:37:28.447217   22139 main.go:141] libmachine: (ha-919901) DBG | Closing plugin on server side
	I0812 10:37:28.447497   22139 main.go:141] libmachine: (ha-919901) DBG | Closing plugin on server side
	I0812 10:37:28.447522   22139 main.go:141] libmachine: (ha-919901) DBG | Closing plugin on server side
	I0812 10:37:28.447589   22139 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:37:28.447602   22139 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:37:28.447608   22139 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:37:28.447617   22139 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:37:28.447745   22139 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0812 10:37:28.447755   22139 round_trippers.go:469] Request Headers:
	I0812 10:37:28.447768   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:37:28.447775   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:37:28.464847   22139 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0812 10:37:28.465671   22139 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0812 10:37:28.465690   22139 round_trippers.go:469] Request Headers:
	I0812 10:37:28.465701   22139 round_trippers.go:473]     Content-Type: application/json
	I0812 10:37:28.465706   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:37:28.465710   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:37:28.470773   22139 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
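
The GET/PUT pair on `/apis/storage.k8s.io/v1/storageclasses/standard` is the default-storageclass addon updating the `standard` StorageClass; the PUT body is not shown in the log, but in practice this is where the standard `storageclass.kubernetes.io/is-default-class` annotation gets set. A hedged client-go sketch of an equivalent update (the kubeconfig path and the annotation step are assumptions, not taken from the log):

    package main

    import (
        "context"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig path; any admin kubeconfig for the cluster works.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        ctx := context.Background()
        sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        if sc.Annotations == nil {
            sc.Annotations = map[string]string{}
        }
        // Mark "standard" as the cluster's default StorageClass.
        sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
        if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
            log.Fatal(err)
        }
    }
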
	I0812 10:37:28.470973   22139 main.go:141] libmachine: Making call to close driver server
	I0812 10:37:28.470990   22139 main.go:141] libmachine: (ha-919901) Calling .Close
	I0812 10:37:28.471298   22139 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:37:28.471318   22139 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:37:28.474411   22139 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0812 10:37:28.476114   22139 addons.go:510] duration metric: took 938.14967ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0812 10:37:28.476160   22139 start.go:246] waiting for cluster config update ...
	I0812 10:37:28.476175   22139 start.go:255] writing updated cluster config ...
	I0812 10:37:28.478101   22139 out.go:177] 
	I0812 10:37:28.480226   22139 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:37:28.480324   22139 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/config.json ...
	I0812 10:37:28.482014   22139 out.go:177] * Starting "ha-919901-m02" control-plane node in "ha-919901" cluster
	I0812 10:37:28.483796   22139 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 10:37:28.483826   22139 cache.go:56] Caching tarball of preloaded images
	I0812 10:37:28.483927   22139 preload.go:172] Found /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 10:37:28.483941   22139 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 10:37:28.484038   22139 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/config.json ...
	I0812 10:37:28.484245   22139 start.go:360] acquireMachinesLock for ha-919901-m02: {Name:mkf1f3ff7da562a087a1e344d13e67c3c8140973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 10:37:28.484302   22139 start.go:364] duration metric: took 34.303µs to acquireMachinesLock for "ha-919901-m02"
	I0812 10:37:28.484323   22139 start.go:93] Provisioning new machine with config: &{Name:ha-919901 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-919901 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 10:37:28.484418   22139 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0812 10:37:28.486110   22139 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 10:37:28.486219   22139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:37:28.486252   22139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:37:28.502135   22139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38017
	I0812 10:37:28.502628   22139 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:37:28.503153   22139 main.go:141] libmachine: Using API Version  1
	I0812 10:37:28.503182   22139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:37:28.503527   22139 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:37:28.503746   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetMachineName
	I0812 10:37:28.503940   22139 main.go:141] libmachine: (ha-919901-m02) Calling .DriverName
	I0812 10:37:28.504112   22139 start.go:159] libmachine.API.Create for "ha-919901" (driver="kvm2")
	I0812 10:37:28.504140   22139 client.go:168] LocalClient.Create starting
	I0812 10:37:28.504181   22139 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem
	I0812 10:37:28.504231   22139 main.go:141] libmachine: Decoding PEM data...
	I0812 10:37:28.504247   22139 main.go:141] libmachine: Parsing certificate...
	I0812 10:37:28.504322   22139 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem
	I0812 10:37:28.504346   22139 main.go:141] libmachine: Decoding PEM data...
	I0812 10:37:28.504358   22139 main.go:141] libmachine: Parsing certificate...
	I0812 10:37:28.504378   22139 main.go:141] libmachine: Running pre-create checks...
	I0812 10:37:28.504389   22139 main.go:141] libmachine: (ha-919901-m02) Calling .PreCreateCheck
	I0812 10:37:28.504581   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetConfigRaw
	I0812 10:37:28.505092   22139 main.go:141] libmachine: Creating machine...
	I0812 10:37:28.505108   22139 main.go:141] libmachine: (ha-919901-m02) Calling .Create
	I0812 10:37:28.505273   22139 main.go:141] libmachine: (ha-919901-m02) Creating KVM machine...
	I0812 10:37:28.506878   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found existing default KVM network
	I0812 10:37:28.507019   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found existing private KVM network mk-ha-919901
	I0812 10:37:28.507170   22139 main.go:141] libmachine: (ha-919901-m02) Setting up store path in /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02 ...
	I0812 10:37:28.507196   22139 main.go:141] libmachine: (ha-919901-m02) Building disk image from file:///home/jenkins/minikube-integration/19409-3774/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0812 10:37:28.507246   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:28.507159   22539 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 10:37:28.507385   22139 main.go:141] libmachine: (ha-919901-m02) Downloading /home/jenkins/minikube-integration/19409-3774/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19409-3774/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0812 10:37:28.781097   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:28.780972   22539 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02/id_rsa...
	I0812 10:37:28.910232   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:28.910067   22539 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02/ha-919901-m02.rawdisk...
	I0812 10:37:28.910270   22139 main.go:141] libmachine: (ha-919901-m02) DBG | Writing magic tar header
	I0812 10:37:28.910285   22139 main.go:141] libmachine: (ha-919901-m02) DBG | Writing SSH key tar header
	I0812 10:37:28.910296   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:28.910186   22539 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02 ...
	I0812 10:37:28.910312   22139 main.go:141] libmachine: (ha-919901-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02
	I0812 10:37:28.910331   22139 main.go:141] libmachine: (ha-919901-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube/machines
	I0812 10:37:28.910351   22139 main.go:141] libmachine: (ha-919901-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 10:37:28.910368   22139 main.go:141] libmachine: (ha-919901-m02) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02 (perms=drwx------)
	I0812 10:37:28.910381   22139 main.go:141] libmachine: (ha-919901-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774
	I0812 10:37:28.910398   22139 main.go:141] libmachine: (ha-919901-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0812 10:37:28.910410   22139 main.go:141] libmachine: (ha-919901-m02) DBG | Checking permissions on dir: /home/jenkins
	I0812 10:37:28.910425   22139 main.go:141] libmachine: (ha-919901-m02) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube/machines (perms=drwxr-xr-x)
	I0812 10:37:28.910439   22139 main.go:141] libmachine: (ha-919901-m02) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube (perms=drwxr-xr-x)
	I0812 10:37:28.910462   22139 main.go:141] libmachine: (ha-919901-m02) DBG | Checking permissions on dir: /home
	I0812 10:37:28.910478   22139 main.go:141] libmachine: (ha-919901-m02) DBG | Skipping /home - not owner
	I0812 10:37:28.910490   22139 main.go:141] libmachine: (ha-919901-m02) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774 (perms=drwxrwxr-x)
	I0812 10:37:28.910508   22139 main.go:141] libmachine: (ha-919901-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0812 10:37:28.910523   22139 main.go:141] libmachine: (ha-919901-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0812 10:37:28.910536   22139 main.go:141] libmachine: (ha-919901-m02) Creating domain...
	I0812 10:37:28.911452   22139 main.go:141] libmachine: (ha-919901-m02) define libvirt domain using xml: 
	I0812 10:37:28.911483   22139 main.go:141] libmachine: (ha-919901-m02) <domain type='kvm'>
	I0812 10:37:28.911495   22139 main.go:141] libmachine: (ha-919901-m02)   <name>ha-919901-m02</name>
	I0812 10:37:28.911506   22139 main.go:141] libmachine: (ha-919901-m02)   <memory unit='MiB'>2200</memory>
	I0812 10:37:28.911543   22139 main.go:141] libmachine: (ha-919901-m02)   <vcpu>2</vcpu>
	I0812 10:37:28.911565   22139 main.go:141] libmachine: (ha-919901-m02)   <features>
	I0812 10:37:28.911576   22139 main.go:141] libmachine: (ha-919901-m02)     <acpi/>
	I0812 10:37:28.911587   22139 main.go:141] libmachine: (ha-919901-m02)     <apic/>
	I0812 10:37:28.911600   22139 main.go:141] libmachine: (ha-919901-m02)     <pae/>
	I0812 10:37:28.911607   22139 main.go:141] libmachine: (ha-919901-m02)     
	I0812 10:37:28.911617   22139 main.go:141] libmachine: (ha-919901-m02)   </features>
	I0812 10:37:28.911629   22139 main.go:141] libmachine: (ha-919901-m02)   <cpu mode='host-passthrough'>
	I0812 10:37:28.911640   22139 main.go:141] libmachine: (ha-919901-m02)   
	I0812 10:37:28.911648   22139 main.go:141] libmachine: (ha-919901-m02)   </cpu>
	I0812 10:37:28.911660   22139 main.go:141] libmachine: (ha-919901-m02)   <os>
	I0812 10:37:28.911671   22139 main.go:141] libmachine: (ha-919901-m02)     <type>hvm</type>
	I0812 10:37:28.911686   22139 main.go:141] libmachine: (ha-919901-m02)     <boot dev='cdrom'/>
	I0812 10:37:28.911697   22139 main.go:141] libmachine: (ha-919901-m02)     <boot dev='hd'/>
	I0812 10:37:28.911707   22139 main.go:141] libmachine: (ha-919901-m02)     <bootmenu enable='no'/>
	I0812 10:37:28.911718   22139 main.go:141] libmachine: (ha-919901-m02)   </os>
	I0812 10:37:28.911728   22139 main.go:141] libmachine: (ha-919901-m02)   <devices>
	I0812 10:37:28.911739   22139 main.go:141] libmachine: (ha-919901-m02)     <disk type='file' device='cdrom'>
	I0812 10:37:28.911760   22139 main.go:141] libmachine: (ha-919901-m02)       <source file='/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02/boot2docker.iso'/>
	I0812 10:37:28.911777   22139 main.go:141] libmachine: (ha-919901-m02)       <target dev='hdc' bus='scsi'/>
	I0812 10:37:28.911787   22139 main.go:141] libmachine: (ha-919901-m02)       <readonly/>
	I0812 10:37:28.911798   22139 main.go:141] libmachine: (ha-919901-m02)     </disk>
	I0812 10:37:28.911811   22139 main.go:141] libmachine: (ha-919901-m02)     <disk type='file' device='disk'>
	I0812 10:37:28.911824   22139 main.go:141] libmachine: (ha-919901-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0812 10:37:28.911840   22139 main.go:141] libmachine: (ha-919901-m02)       <source file='/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02/ha-919901-m02.rawdisk'/>
	I0812 10:37:28.911856   22139 main.go:141] libmachine: (ha-919901-m02)       <target dev='hda' bus='virtio'/>
	I0812 10:37:28.911868   22139 main.go:141] libmachine: (ha-919901-m02)     </disk>
	I0812 10:37:28.911877   22139 main.go:141] libmachine: (ha-919901-m02)     <interface type='network'>
	I0812 10:37:28.911894   22139 main.go:141] libmachine: (ha-919901-m02)       <source network='mk-ha-919901'/>
	I0812 10:37:28.911905   22139 main.go:141] libmachine: (ha-919901-m02)       <model type='virtio'/>
	I0812 10:37:28.911925   22139 main.go:141] libmachine: (ha-919901-m02)     </interface>
	I0812 10:37:28.911940   22139 main.go:141] libmachine: (ha-919901-m02)     <interface type='network'>
	I0812 10:37:28.911951   22139 main.go:141] libmachine: (ha-919901-m02)       <source network='default'/>
	I0812 10:37:28.911962   22139 main.go:141] libmachine: (ha-919901-m02)       <model type='virtio'/>
	I0812 10:37:28.911974   22139 main.go:141] libmachine: (ha-919901-m02)     </interface>
	I0812 10:37:28.911985   22139 main.go:141] libmachine: (ha-919901-m02)     <serial type='pty'>
	I0812 10:37:28.911997   22139 main.go:141] libmachine: (ha-919901-m02)       <target port='0'/>
	I0812 10:37:28.912011   22139 main.go:141] libmachine: (ha-919901-m02)     </serial>
	I0812 10:37:28.912024   22139 main.go:141] libmachine: (ha-919901-m02)     <console type='pty'>
	I0812 10:37:28.912036   22139 main.go:141] libmachine: (ha-919901-m02)       <target type='serial' port='0'/>
	I0812 10:37:28.912048   22139 main.go:141] libmachine: (ha-919901-m02)     </console>
	I0812 10:37:28.912059   22139 main.go:141] libmachine: (ha-919901-m02)     <rng model='virtio'>
	I0812 10:37:28.912071   22139 main.go:141] libmachine: (ha-919901-m02)       <backend model='random'>/dev/random</backend>
	I0812 10:37:28.912085   22139 main.go:141] libmachine: (ha-919901-m02)     </rng>
	I0812 10:37:28.912097   22139 main.go:141] libmachine: (ha-919901-m02)     
	I0812 10:37:28.912103   22139 main.go:141] libmachine: (ha-919901-m02)     
	I0812 10:37:28.912113   22139 main.go:141] libmachine: (ha-919901-m02)   </devices>
	I0812 10:37:28.912122   22139 main.go:141] libmachine: (ha-919901-m02) </domain>
	I0812 10:37:28.912134   22139 main.go:141] libmachine: (ha-919901-m02) 
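
The block above is the complete libvirt domain XML for the m02 node, logged one line at a time. Conceptually the kvm2 driver then just defines the domain and boots it through the libvirt API (the "Getting domain xml... / Creating domain..." lines below). A minimal sketch of that define-then-start step with the Go libvirt bindings; the XML is assumed to be the document printed above saved to a file, and the file name is made up:

    package main

    import (
        "log"
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        // The domain XML printed in the log, written out to a file beforehand.
        xml, err := os.ReadFile("ha-919901-m02.xml")
        if err != nil {
            log.Fatal(err)
        }

        // Same URI as KVMQemuURI in the cluster config above.
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Define the persistent domain, then boot it.
        dom, err := conn.DomainDefineXML(string(xml))
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            log.Fatal(err)
        }
        log.Println("domain ha-919901-m02 defined and started")
    }
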
	I0812 10:37:28.919566   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:8c:1d:03 in network default
	I0812 10:37:28.920179   22139 main.go:141] libmachine: (ha-919901-m02) Ensuring networks are active...
	I0812 10:37:28.920198   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:28.920934   22139 main.go:141] libmachine: (ha-919901-m02) Ensuring network default is active
	I0812 10:37:28.921183   22139 main.go:141] libmachine: (ha-919901-m02) Ensuring network mk-ha-919901 is active
	I0812 10:37:28.921528   22139 main.go:141] libmachine: (ha-919901-m02) Getting domain xml...
	I0812 10:37:28.922191   22139 main.go:141] libmachine: (ha-919901-m02) Creating domain...
	I0812 10:37:30.153706   22139 main.go:141] libmachine: (ha-919901-m02) Waiting to get IP...
	I0812 10:37:30.154606   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:30.154983   22139 main.go:141] libmachine: (ha-919901-m02) DBG | unable to find current IP address of domain ha-919901-m02 in network mk-ha-919901
	I0812 10:37:30.155023   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:30.154974   22539 retry.go:31] will retry after 288.98178ms: waiting for machine to come up
	I0812 10:37:30.445696   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:30.446231   22139 main.go:141] libmachine: (ha-919901-m02) DBG | unable to find current IP address of domain ha-919901-m02 in network mk-ha-919901
	I0812 10:37:30.446256   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:30.446189   22539 retry.go:31] will retry after 236.090765ms: waiting for machine to come up
	I0812 10:37:30.683850   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:30.684299   22139 main.go:141] libmachine: (ha-919901-m02) DBG | unable to find current IP address of domain ha-919901-m02 in network mk-ha-919901
	I0812 10:37:30.684325   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:30.684259   22539 retry.go:31] will retry after 430.221058ms: waiting for machine to come up
	I0812 10:37:31.115951   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:31.116471   22139 main.go:141] libmachine: (ha-919901-m02) DBG | unable to find current IP address of domain ha-919901-m02 in network mk-ha-919901
	I0812 10:37:31.116494   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:31.116403   22539 retry.go:31] will retry after 416.1691ms: waiting for machine to come up
	I0812 10:37:31.533738   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:31.534279   22139 main.go:141] libmachine: (ha-919901-m02) DBG | unable to find current IP address of domain ha-919901-m02 in network mk-ha-919901
	I0812 10:37:31.534308   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:31.534240   22539 retry.go:31] will retry after 697.888434ms: waiting for machine to come up
	I0812 10:37:32.235212   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:32.236071   22139 main.go:141] libmachine: (ha-919901-m02) DBG | unable to find current IP address of domain ha-919901-m02 in network mk-ha-919901
	I0812 10:37:32.236102   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:32.236024   22539 retry.go:31] will retry after 840.769999ms: waiting for machine to come up
	I0812 10:37:33.078146   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:33.078614   22139 main.go:141] libmachine: (ha-919901-m02) DBG | unable to find current IP address of domain ha-919901-m02 in network mk-ha-919901
	I0812 10:37:33.078637   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:33.078574   22539 retry.go:31] will retry after 933.572158ms: waiting for machine to come up
	I0812 10:37:34.014056   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:34.014359   22139 main.go:141] libmachine: (ha-919901-m02) DBG | unable to find current IP address of domain ha-919901-m02 in network mk-ha-919901
	I0812 10:37:34.014381   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:34.014321   22539 retry.go:31] will retry after 1.271180368s: waiting for machine to come up
	I0812 10:37:35.287618   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:35.288006   22139 main.go:141] libmachine: (ha-919901-m02) DBG | unable to find current IP address of domain ha-919901-m02 in network mk-ha-919901
	I0812 10:37:35.288028   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:35.287966   22539 retry.go:31] will retry after 1.697317183s: waiting for machine to come up
	I0812 10:37:36.986948   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:36.987355   22139 main.go:141] libmachine: (ha-919901-m02) DBG | unable to find current IP address of domain ha-919901-m02 in network mk-ha-919901
	I0812 10:37:36.987427   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:36.987314   22539 retry.go:31] will retry after 2.104575739s: waiting for machine to come up
	I0812 10:37:39.093432   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:39.093883   22139 main.go:141] libmachine: (ha-919901-m02) DBG | unable to find current IP address of domain ha-919901-m02 in network mk-ha-919901
	I0812 10:37:39.093911   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:39.093839   22539 retry.go:31] will retry after 2.180330285s: waiting for machine to come up
	I0812 10:37:41.277251   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:41.277754   22139 main.go:141] libmachine: (ha-919901-m02) DBG | unable to find current IP address of domain ha-919901-m02 in network mk-ha-919901
	I0812 10:37:41.277782   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:41.277682   22539 retry.go:31] will retry after 3.39047776s: waiting for machine to come up
	I0812 10:37:44.670256   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:44.670796   22139 main.go:141] libmachine: (ha-919901-m02) DBG | unable to find current IP address of domain ha-919901-m02 in network mk-ha-919901
	I0812 10:37:44.670824   22139 main.go:141] libmachine: (ha-919901-m02) DBG | I0812 10:37:44.670757   22539 retry.go:31] will retry after 4.366154175s: waiting for machine to come up
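
The repeated "will retry after ...: waiting for machine to come up" lines come from minikube's retry helper: the driver keeps looking for a DHCP lease for the new VM and, while none exists, sleeps for a randomized, growing interval before trying again. A small self-contained sketch of that wait pattern; checkLease is a placeholder for the real lease lookup, not minikube's API:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP retries check() with a growing, jittered delay until it
    // succeeds or maxWait is exceeded, similar to the retries in the log.
    func waitForIP(check func() (string, error), maxWait time.Duration) (string, error) {
        start := time.Now()
        delay := 250 * time.Millisecond
        for time.Since(start) < maxWait {
            if ip, err := check(); err == nil {
                return ip, nil
            }
            jittered := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %s: waiting for machine to come up\n", jittered)
            time.Sleep(jittered)
            delay *= 2 // back off
        }
        return "", errors.New("timed out waiting for an IP address")
    }

    func main() {
        // Stand-in check: a real implementation would query the libvirt
        // DHCP leases for the domain's MAC address.
        checkLease := func() (string, error) { return "", errors.New("no lease yet") }
        if _, err := waitForIP(checkLease, 3*time.Second); err != nil {
            fmt.Println(err)
        }
    }
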
	I0812 10:37:49.038704   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.039253   22139 main.go:141] libmachine: (ha-919901-m02) Found IP for machine: 192.168.39.139
	I0812 10:37:49.039288   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has current primary IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.039298   22139 main.go:141] libmachine: (ha-919901-m02) Reserving static IP address...
	I0812 10:37:49.039779   22139 main.go:141] libmachine: (ha-919901-m02) DBG | unable to find host DHCP lease matching {name: "ha-919901-m02", mac: "52:54:00:aa:34:35", ip: "192.168.39.139"} in network mk-ha-919901
	I0812 10:37:49.117017   22139 main.go:141] libmachine: (ha-919901-m02) DBG | Getting to WaitForSSH function...
	I0812 10:37:49.117049   22139 main.go:141] libmachine: (ha-919901-m02) Reserved static IP address: 192.168.39.139
	I0812 10:37:49.117063   22139 main.go:141] libmachine: (ha-919901-m02) Waiting for SSH to be available...
	I0812 10:37:49.119789   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.120270   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:minikube Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:49.120297   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.120506   22139 main.go:141] libmachine: (ha-919901-m02) DBG | Using SSH client type: external
	I0812 10:37:49.120535   22139 main.go:141] libmachine: (ha-919901-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02/id_rsa (-rw-------)
	I0812 10:37:49.120567   22139 main.go:141] libmachine: (ha-919901-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.139 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0812 10:37:49.120604   22139 main.go:141] libmachine: (ha-919901-m02) DBG | About to run SSH command:
	I0812 10:37:49.120621   22139 main.go:141] libmachine: (ha-919901-m02) DBG | exit 0
	I0812 10:37:49.240732   22139 main.go:141] libmachine: (ha-919901-m02) DBG | SSH cmd err, output: <nil>: 
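
The "About to run SSH command: exit 0" exchange above is just a reachability probe: the driver shells out to the system ssh client with the options shown and treats a zero exit status as "SSH is up". A rough equivalent with os/exec, reusing the address, user, key path, and a subset of the flags from the log (the wrapper function itself is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // sshReady runs `ssh ... user@addr exit 0` and reports whether it succeeded.
    func sshReady(addr, user, keyPath string) bool {
        cmd := exec.Command("ssh",
            "-F", "/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            user+"@"+addr,
            "exit 0")
        return cmd.Run() == nil
    }

    func main() {
        key := "/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02/id_rsa"
        for !sshReady("192.168.39.139", "docker", key) {
            time.Sleep(2 * time.Second)
        }
        fmt.Println("SSH is available")
    }
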
	I0812 10:37:49.241012   22139 main.go:141] libmachine: (ha-919901-m02) KVM machine creation complete!
	I0812 10:37:49.241324   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetConfigRaw
	I0812 10:37:49.241891   22139 main.go:141] libmachine: (ha-919901-m02) Calling .DriverName
	I0812 10:37:49.242080   22139 main.go:141] libmachine: (ha-919901-m02) Calling .DriverName
	I0812 10:37:49.242197   22139 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0812 10:37:49.242214   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetState
	I0812 10:37:49.243430   22139 main.go:141] libmachine: Detecting operating system of created instance...
	I0812 10:37:49.243449   22139 main.go:141] libmachine: Waiting for SSH to be available...
	I0812 10:37:49.243454   22139 main.go:141] libmachine: Getting to WaitForSSH function...
	I0812 10:37:49.243460   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHHostname
	I0812 10:37:49.245554   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.245945   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:49.245989   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.245995   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHPort
	I0812 10:37:49.246157   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:37:49.246323   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:37:49.246463   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHUsername
	I0812 10:37:49.246611   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:37:49.246800   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0812 10:37:49.246817   22139 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0812 10:37:49.340024   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 10:37:49.340067   22139 main.go:141] libmachine: Detecting the provisioner...
	I0812 10:37:49.340078   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHHostname
	I0812 10:37:49.342907   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.343316   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:49.343340   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.343612   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHPort
	I0812 10:37:49.343843   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:37:49.344017   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:37:49.344151   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHUsername
	I0812 10:37:49.344282   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:37:49.344438   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0812 10:37:49.344450   22139 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0812 10:37:49.445619   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0812 10:37:49.445723   22139 main.go:141] libmachine: found compatible host: buildroot
	I0812 10:37:49.445741   22139 main.go:141] libmachine: Provisioning with buildroot...
	I0812 10:37:49.445751   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetMachineName
	I0812 10:37:49.445990   22139 buildroot.go:166] provisioning hostname "ha-919901-m02"
	I0812 10:37:49.446016   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetMachineName
	I0812 10:37:49.446197   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHHostname
	I0812 10:37:49.449003   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.449464   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:49.449486   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.449707   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHPort
	I0812 10:37:49.449925   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:37:49.450085   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:37:49.450223   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHUsername
	I0812 10:37:49.450395   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:37:49.450550   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0812 10:37:49.450563   22139 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-919901-m02 && echo "ha-919901-m02" | sudo tee /etc/hostname
	I0812 10:37:49.568615   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-919901-m02
	
	I0812 10:37:49.568637   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHHostname
	I0812 10:37:49.571358   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.571725   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:49.571756   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.571931   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHPort
	I0812 10:37:49.572123   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:37:49.572308   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:37:49.572450   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHUsername
	I0812 10:37:49.572601   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:37:49.572771   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0812 10:37:49.572792   22139 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-919901-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-919901-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-919901-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 10:37:49.678025   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 10:37:49.678058   22139 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19409-3774/.minikube CaCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19409-3774/.minikube}
	I0812 10:37:49.678077   22139 buildroot.go:174] setting up certificates
	I0812 10:37:49.678086   22139 provision.go:84] configureAuth start
	I0812 10:37:49.678097   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetMachineName
	I0812 10:37:49.678391   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetIP
	I0812 10:37:49.681793   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.682166   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:49.682197   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.682378   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHHostname
	I0812 10:37:49.684949   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.685438   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:49.685462   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.685710   22139 provision.go:143] copyHostCerts
	I0812 10:37:49.685747   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem
	I0812 10:37:49.685779   22139 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem, removing ...
	I0812 10:37:49.685788   22139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem
	I0812 10:37:49.685851   22139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem (1123 bytes)
	I0812 10:37:49.685958   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem
	I0812 10:37:49.685987   22139 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem, removing ...
	I0812 10:37:49.685993   22139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem
	I0812 10:37:49.686033   22139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem (1679 bytes)
	I0812 10:37:49.686112   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem
	I0812 10:37:49.686150   22139 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem, removing ...
	I0812 10:37:49.686158   22139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem
	I0812 10:37:49.686194   22139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem (1082 bytes)
	I0812 10:37:49.686333   22139 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem org=jenkins.ha-919901-m02 san=[127.0.0.1 192.168.39.139 ha-919901-m02 localhost minikube]
	I0812 10:37:49.869783   22139 provision.go:177] copyRemoteCerts
	I0812 10:37:49.869853   22139 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 10:37:49.869882   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHHostname
	I0812 10:37:49.872784   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.873171   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:49.873206   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:49.873428   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHPort
	I0812 10:37:49.873641   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:37:49.873842   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHUsername
	I0812 10:37:49.873998   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02/id_rsa Username:docker}
	I0812 10:37:49.951239   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0812 10:37:49.951308   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0812 10:37:49.974833   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0812 10:37:49.974900   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0812 10:37:49.999209   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0812 10:37:49.999298   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0812 10:37:50.023782   22139 provision.go:87] duration metric: took 345.685308ms to configureAuth
	I0812 10:37:50.023811   22139 buildroot.go:189] setting minikube options for container-runtime
	I0812 10:37:50.024049   22139 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:37:50.024145   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHHostname
	I0812 10:37:50.026812   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:50.027203   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:50.027236   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:50.027385   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHPort
	I0812 10:37:50.027601   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:37:50.027802   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:37:50.027923   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHUsername
	I0812 10:37:50.028141   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:37:50.028385   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0812 10:37:50.028411   22139 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 10:37:50.281325   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 10:37:50.281357   22139 main.go:141] libmachine: Checking connection to Docker...
	I0812 10:37:50.281368   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetURL
	I0812 10:37:50.282640   22139 main.go:141] libmachine: (ha-919901-m02) DBG | Using libvirt version 6000000
	I0812 10:37:50.285281   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:50.285705   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:50.285735   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:50.285873   22139 main.go:141] libmachine: Docker is up and running!
	I0812 10:37:50.285889   22139 main.go:141] libmachine: Reticulating splines...
	I0812 10:37:50.285895   22139 client.go:171] duration metric: took 21.781744157s to LocalClient.Create
	I0812 10:37:50.285917   22139 start.go:167] duration metric: took 21.781823399s to libmachine.API.Create "ha-919901"
	I0812 10:37:50.285925   22139 start.go:293] postStartSetup for "ha-919901-m02" (driver="kvm2")
	I0812 10:37:50.285935   22139 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 10:37:50.285962   22139 main.go:141] libmachine: (ha-919901-m02) Calling .DriverName
	I0812 10:37:50.286214   22139 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 10:37:50.286236   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHHostname
	I0812 10:37:50.288506   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:50.288886   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:50.288914   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:50.289069   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHPort
	I0812 10:37:50.289245   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:37:50.289441   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHUsername
	I0812 10:37:50.289580   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02/id_rsa Username:docker}
	I0812 10:37:50.366975   22139 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 10:37:50.370963   22139 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 10:37:50.370989   22139 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/addons for local assets ...
	I0812 10:37:50.371057   22139 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/files for local assets ...
	I0812 10:37:50.371159   22139 filesync.go:149] local asset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> 109272.pem in /etc/ssl/certs
	I0812 10:37:50.371173   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> /etc/ssl/certs/109272.pem
	I0812 10:37:50.371282   22139 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 10:37:50.381168   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /etc/ssl/certs/109272.pem (1708 bytes)
	I0812 10:37:50.405187   22139 start.go:296] duration metric: took 119.249935ms for postStartSetup
	I0812 10:37:50.405244   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetConfigRaw
	I0812 10:37:50.405847   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetIP
	I0812 10:37:50.408849   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:50.409229   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:50.409251   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:50.409509   22139 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/config.json ...
	I0812 10:37:50.409710   22139 start.go:128] duration metric: took 21.925281715s to createHost
	I0812 10:37:50.409733   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHHostname
	I0812 10:37:50.411955   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:50.412255   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:50.412285   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:50.412412   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHPort
	I0812 10:37:50.412629   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:37:50.412777   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:37:50.412922   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHUsername
	I0812 10:37:50.413104   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:37:50.413271   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0812 10:37:50.413282   22139 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0812 10:37:50.509662   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723459070.484863706
	
	I0812 10:37:50.509685   22139 fix.go:216] guest clock: 1723459070.484863706
	I0812 10:37:50.509693   22139 fix.go:229] Guest: 2024-08-12 10:37:50.484863706 +0000 UTC Remote: 2024-08-12 10:37:50.409722022 +0000 UTC m=+74.193899662 (delta=75.141684ms)
	I0812 10:37:50.509708   22139 fix.go:200] guest clock delta is within tolerance: 75.141684ms
	I0812 10:37:50.509713   22139 start.go:83] releasing machines lock for "ha-919901-m02", held for 22.02540096s
	I0812 10:37:50.509731   22139 main.go:141] libmachine: (ha-919901-m02) Calling .DriverName
	I0812 10:37:50.510014   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetIP
	I0812 10:37:50.512753   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:50.513153   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:50.513179   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:50.515749   22139 out.go:177] * Found network options:
	I0812 10:37:50.517211   22139 out.go:177]   - NO_PROXY=192.168.39.5
	W0812 10:37:50.518655   22139 proxy.go:119] fail to check proxy env: Error ip not in block
	I0812 10:37:50.518689   22139 main.go:141] libmachine: (ha-919901-m02) Calling .DriverName
	I0812 10:37:50.519289   22139 main.go:141] libmachine: (ha-919901-m02) Calling .DriverName
	I0812 10:37:50.519560   22139 main.go:141] libmachine: (ha-919901-m02) Calling .DriverName
	I0812 10:37:50.519586   22139 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 10:37:50.519625   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHHostname
	W0812 10:37:50.519837   22139 proxy.go:119] fail to check proxy env: Error ip not in block
	I0812 10:37:50.519910   22139 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 10:37:50.519928   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHHostname
	I0812 10:37:50.522516   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:50.522799   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:50.522936   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:50.522961   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:50.523088   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHPort
	I0812 10:37:50.523175   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:50.523199   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:50.523239   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:37:50.523418   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHUsername
	I0812 10:37:50.523420   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHPort
	I0812 10:37:50.523595   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:37:50.523607   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02/id_rsa Username:docker}
	I0812 10:37:50.523723   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHUsername
	I0812 10:37:50.523872   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02/id_rsa Username:docker}
	I0812 10:37:50.755016   22139 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 10:37:50.760527   22139 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 10:37:50.760593   22139 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 10:37:50.776992   22139 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0812 10:37:50.777014   22139 start.go:495] detecting cgroup driver to use...
	I0812 10:37:50.777083   22139 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 10:37:50.795454   22139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 10:37:50.809504   22139 docker.go:217] disabling cri-docker service (if available) ...
	I0812 10:37:50.809570   22139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 10:37:50.823556   22139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 10:37:50.837623   22139 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 10:37:50.959183   22139 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 10:37:51.110686   22139 docker.go:233] disabling docker service ...
	I0812 10:37:51.110759   22139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 10:37:51.124966   22139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 10:37:51.137913   22139 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 10:37:51.279757   22139 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 10:37:51.412131   22139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 10:37:51.427898   22139 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 10:37:51.447921   22139 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0812 10:37:51.447980   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:37:51.459496   22139 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 10:37:51.459550   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:37:51.471100   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:37:51.482858   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:37:51.494998   22139 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 10:37:51.506745   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:37:51.518790   22139 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:37:51.535691   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:37:51.546757   22139 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 10:37:51.556586   22139 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0812 10:37:51.556654   22139 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0812 10:37:51.569752   22139 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 10:37:51.580547   22139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 10:37:51.693279   22139 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0812 10:37:51.832904   22139 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 10:37:51.832980   22139 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 10:37:51.837394   22139 start.go:563] Will wait 60s for crictl version
	I0812 10:37:51.837457   22139 ssh_runner.go:195] Run: which crictl
	I0812 10:37:51.841299   22139 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 10:37:51.880357   22139 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0812 10:37:51.880424   22139 ssh_runner.go:195] Run: crio --version
	I0812 10:37:51.910678   22139 ssh_runner.go:195] Run: crio --version
	I0812 10:37:51.941770   22139 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0812 10:37:51.943452   22139 out.go:177]   - env NO_PROXY=192.168.39.5
	I0812 10:37:51.944794   22139 main.go:141] libmachine: (ha-919901-m02) Calling .GetIP
	I0812 10:37:51.947576   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:51.947933   22139 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:42 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:37:51.947969   22139 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:37:51.948192   22139 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0812 10:37:51.952212   22139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 10:37:51.964039   22139 mustload.go:65] Loading cluster: ha-919901
	I0812 10:37:51.964238   22139 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:37:51.964513   22139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:37:51.964538   22139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:37:51.979245   22139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35383
	I0812 10:37:51.979712   22139 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:37:51.980167   22139 main.go:141] libmachine: Using API Version  1
	I0812 10:37:51.980190   22139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:37:51.980466   22139 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:37:51.980643   22139 main.go:141] libmachine: (ha-919901) Calling .GetState
	I0812 10:37:51.982290   22139 host.go:66] Checking if "ha-919901" exists ...
	I0812 10:37:51.982690   22139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:37:51.982722   22139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:37:51.997855   22139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42515
	I0812 10:37:51.998260   22139 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:37:51.998861   22139 main.go:141] libmachine: Using API Version  1
	I0812 10:37:51.998881   22139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:37:51.999213   22139 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:37:51.999399   22139 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:37:51.999584   22139 certs.go:68] Setting up /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901 for IP: 192.168.39.139
	I0812 10:37:51.999595   22139 certs.go:194] generating shared ca certs ...
	I0812 10:37:51.999612   22139 certs.go:226] acquiring lock for ca certs: {Name:mkab0b83a49d5695bc804bf960b468b7a47f73f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:37:51.999729   22139 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key
	I0812 10:37:51.999769   22139 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key
	I0812 10:37:51.999781   22139 certs.go:256] generating profile certs ...
	I0812 10:37:51.999865   22139 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/client.key
	I0812 10:37:51.999888   22139 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key.e79e017f
	I0812 10:37:51.999902   22139 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt.e79e017f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.5 192.168.39.139 192.168.39.254]
	I0812 10:37:52.103250   22139 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt.e79e017f ...
	I0812 10:37:52.103277   22139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt.e79e017f: {Name:mke462d4f0c27362085929f70613afd49818b647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:37:52.103437   22139 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key.e79e017f ...
	I0812 10:37:52.103449   22139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key.e79e017f: {Name:mk18c46c24dd2af2af961266b2e619e3af1f3a06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:37:52.103513   22139 certs.go:381] copying /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt.e79e017f -> /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt
	I0812 10:37:52.103662   22139 certs.go:385] copying /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key.e79e017f -> /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key
	I0812 10:37:52.103798   22139 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.key
	I0812 10:37:52.103816   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0812 10:37:52.103831   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0812 10:37:52.103843   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0812 10:37:52.103855   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0812 10:37:52.103865   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0812 10:37:52.103877   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0812 10:37:52.103888   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0812 10:37:52.103902   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0812 10:37:52.103949   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem (1338 bytes)
	W0812 10:37:52.103979   22139 certs.go:480] ignoring /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927_empty.pem, impossibly tiny 0 bytes
	I0812 10:37:52.103989   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem (1679 bytes)
	I0812 10:37:52.104013   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem (1082 bytes)
	I0812 10:37:52.104035   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem (1123 bytes)
	I0812 10:37:52.104059   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem (1679 bytes)
	I0812 10:37:52.104100   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem (1708 bytes)
	I0812 10:37:52.104125   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> /usr/share/ca-certificates/109272.pem
	I0812 10:37:52.104139   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:37:52.104151   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem -> /usr/share/ca-certificates/10927.pem
	I0812 10:37:52.104178   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:37:52.107325   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:37:52.107794   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:37:52.107832   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:37:52.107982   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:37:52.108158   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:37:52.108270   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:37:52.108367   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:37:52.181362   22139 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0812 10:37:52.185983   22139 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0812 10:37:52.196883   22139 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0812 10:37:52.201493   22139 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0812 10:37:52.212532   22139 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0812 10:37:52.217010   22139 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0812 10:37:52.228117   22139 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0812 10:37:52.232146   22139 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0812 10:37:52.243051   22139 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0812 10:37:52.247288   22139 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0812 10:37:52.257695   22139 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0812 10:37:52.262001   22139 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0812 10:37:52.273500   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 10:37:52.300730   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 10:37:52.324084   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 10:37:52.348251   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 10:37:52.371770   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0812 10:37:52.395047   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0812 10:37:52.417533   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 10:37:52.440439   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0812 10:37:52.463551   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /usr/share/ca-certificates/109272.pem (1708 bytes)
	I0812 10:37:52.490025   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 10:37:52.514468   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem --> /usr/share/ca-certificates/10927.pem (1338 bytes)
	I0812 10:37:52.538852   22139 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0812 10:37:52.556272   22139 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0812 10:37:52.572785   22139 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0812 10:37:52.589420   22139 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0812 10:37:52.605303   22139 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0812 10:37:52.622281   22139 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0812 10:37:52.638097   22139 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0812 10:37:52.654849   22139 ssh_runner.go:195] Run: openssl version
	I0812 10:37:52.660503   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10927.pem && ln -fs /usr/share/ca-certificates/10927.pem /etc/ssl/certs/10927.pem"
	I0812 10:37:52.670964   22139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10927.pem
	I0812 10:37:52.675215   22139 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 10:33 /usr/share/ca-certificates/10927.pem
	I0812 10:37:52.675267   22139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10927.pem
	I0812 10:37:52.680886   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10927.pem /etc/ssl/certs/51391683.0"
	I0812 10:37:52.691217   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109272.pem && ln -fs /usr/share/ca-certificates/109272.pem /etc/ssl/certs/109272.pem"
	I0812 10:37:52.702203   22139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109272.pem
	I0812 10:37:52.706268   22139 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 10:33 /usr/share/ca-certificates/109272.pem
	I0812 10:37:52.706329   22139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109272.pem
	I0812 10:37:52.711765   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109272.pem /etc/ssl/certs/3ec20f2e.0"
	I0812 10:37:52.722023   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 10:37:52.732135   22139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:37:52.736291   22139 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:37:52.736353   22139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:37:52.741886   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 10:37:52.752267   22139 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 10:37:52.756072   22139 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0812 10:37:52.756123   22139 kubeadm.go:934] updating node {m02 192.168.39.139 8443 v1.30.3 crio true true} ...
	I0812 10:37:52.756200   22139 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-919901-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.139
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-919901 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0812 10:37:52.756225   22139 kube-vip.go:115] generating kube-vip config ...
	I0812 10:37:52.756258   22139 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0812 10:37:52.772983   22139 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0812 10:37:52.773043   22139 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0812 10:37:52.773091   22139 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0812 10:37:52.782547   22139 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0812 10:37:52.782618   22139 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0812 10:37:52.792186   22139 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0812 10:37:52.792219   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0812 10:37:52.792246   22139 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19409-3774/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0812 10:37:52.792287   22139 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19409-3774/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0812 10:37:52.792299   22139 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0812 10:37:52.797070   22139 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0812 10:37:52.797105   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0812 10:37:56.387092   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0812 10:37:56.387185   22139 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0812 10:37:56.391994   22139 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0812 10:37:56.392032   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0812 10:38:07.410733   22139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:38:07.426761   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0812 10:38:07.426856   22139 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0812 10:38:07.431668   22139 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0812 10:38:07.431707   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
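Each of those three transfers is gated on the stat existence check above: ssh_runner only scp's a binary when stat on the guest fails. A rough stand-alone sketch of that check using golang.org/x/crypto/ssh; the host, user and key path below are the ones logged for the primary node later in this run, so point them at whichever guest you are actually probing:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// SSH endpoint and key path taken from the sshutil line in this log (primary node).
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	client, err := ssh.Dial("tcp", "192.168.39.5:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// A non-zero exit from stat means the binary is absent and must be copied over.
	if err := session.Run(`stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet`); err != nil {
		fmt.Println("kubelet missing on the guest, transfer required:", err)
		return
	}
	fmt.Println("kubelet already present, skipping transfer")
}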
	I0812 10:38:07.816979   22139 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0812 10:38:07.826989   22139 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0812 10:38:07.843396   22139 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 10:38:07.860325   22139 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0812 10:38:07.876513   22139 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0812 10:38:07.880379   22139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
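The one-liner above keeps /etc/hosts idempotent: drop any existing control-plane.minikube.internal line, then append the current HA virtual IP. The same logic in plain Go (a sketch, not minikube's code; it must run as root to rewrite /etc/hosts):

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const (
		hostsPath = "/etc/hosts"
		hostname  = "control-plane.minikube.internal"
		vip       = "192.168.39.254" // HA virtual IP from this log
	)

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		log.Fatal(err)
	}

	// Keep every line except a stale entry for the control-plane alias.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, vip+"\t"+hostname)

	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}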
	I0812 10:38:07.892322   22139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 10:38:08.015488   22139 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 10:38:08.033052   22139 host.go:66] Checking if "ha-919901" exists ...
	I0812 10:38:08.033474   22139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:38:08.033513   22139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:38:08.048583   22139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46023
	I0812 10:38:08.049094   22139 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:38:08.049629   22139 main.go:141] libmachine: Using API Version  1
	I0812 10:38:08.049652   22139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:38:08.049967   22139 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:38:08.050179   22139 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:38:08.050319   22139 start.go:317] joinCluster: &{Name:ha-919901 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-919901 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 10:38:08.050436   22139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0812 10:38:08.050458   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:38:08.053750   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:38:08.054113   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:38:08.054157   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:38:08.054311   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:38:08.054516   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:38:08.054670   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:38:08.054843   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:38:08.210441   22139 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 10:38:08.210483   22139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3km7df.rl0mno282pd477ol --discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-919901-m02 --control-plane --apiserver-advertise-address=192.168.39.139 --apiserver-bind-port=8443"
	I0812 10:38:31.080328   22139 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3km7df.rl0mno282pd477ol --discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-919901-m02 --control-plane --apiserver-advertise-address=192.168.39.139 --apiserver-bind-port=8443": (22.869804459s)
	I0812 10:38:31.080363   22139 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0812 10:38:31.624619   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-919901-m02 minikube.k8s.io/updated_at=2024_08_12T10_38_31_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7 minikube.k8s.io/name=ha-919901 minikube.k8s.io/primary=false
	I0812 10:38:31.746083   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-919901-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0812 10:38:31.905406   22139 start.go:319] duration metric: took 23.85508197s to joinCluster
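The join above is two halves: kubeadm token create --print-join-command --ttl=0 on the existing control plane, then running the printed command on m02 with the extra --control-plane and --apiserver-advertise-address flags shown in the log. A trivial sketch of the first half via os/exec, using the binary path from this run (execute it on the primary node):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// --ttl=0 keeps the bootstrap token from expiring mid-join.
	out, err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.3/kubeadm",
		"token", "create", "--print-join-command", "--ttl=0").CombinedOutput()
	if err != nil {
		log.Fatalf("kubeadm token create failed: %v\n%s", err, out)
	}
	// The output is a ready-to-run "kubeadm join <endpoint> --token ... --discovery-token-ca-cert-hash ..." line.
	fmt.Printf("%s", out)
}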
	I0812 10:38:31.905474   22139 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 10:38:31.905822   22139 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:38:31.907125   22139 out.go:177] * Verifying Kubernetes components...
	I0812 10:38:31.908554   22139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 10:38:32.179187   22139 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 10:38:32.225563   22139 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 10:38:32.225828   22139 kapi.go:59] client config for ha-919901: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/client.crt", KeyFile:"/home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/client.key", CAFile:"/home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d03100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0812 10:38:32.225893   22139 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.5:8443
	I0812 10:38:32.226113   22139 node_ready.go:35] waiting up to 6m0s for node "ha-919901-m02" to be "Ready" ...
	I0812 10:38:32.226206   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:32.226220   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:32.226231   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:32.226243   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:32.241504   22139 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0812 10:38:32.726307   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:32.726335   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:32.726346   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:32.726352   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:32.732174   22139 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0812 10:38:33.226656   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:33.226678   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:33.226691   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:33.226695   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:33.232112   22139 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0812 10:38:33.726785   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:33.726809   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:33.726818   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:33.726823   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:33.730913   22139 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 10:38:34.227085   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:34.227111   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:34.227120   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:34.227126   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:34.230851   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:34.231447   22139 node_ready.go:53] node "ha-919901-m02" has status "Ready":"False"
	I0812 10:38:34.726991   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:34.727018   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:34.727030   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:34.727038   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:34.730671   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:35.226808   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:35.226828   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:35.226835   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:35.226839   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:35.231030   22139 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 10:38:35.726413   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:35.726449   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:35.726457   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:35.726462   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:35.730777   22139 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 10:38:36.226375   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:36.226400   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:36.226418   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:36.226424   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:36.230030   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:36.726881   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:36.726908   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:36.726918   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:36.726924   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:36.730189   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:36.730849   22139 node_ready.go:53] node "ha-919901-m02" has status "Ready":"False"
	I0812 10:38:37.227210   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:37.227233   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:37.227240   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:37.227244   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:37.230741   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:37.726942   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:37.726966   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:37.726976   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:37.726981   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:37.738632   22139 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0812 10:38:38.227025   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:38.227046   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:38.227054   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:38.227058   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:38.230254   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:38.726657   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:38.726697   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:38.726709   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:38.726714   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:38.732549   22139 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0812 10:38:38.733426   22139 node_ready.go:53] node "ha-919901-m02" has status "Ready":"False"
	I0812 10:38:39.226871   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:39.226890   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:39.226898   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:39.226903   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:39.229835   22139 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 10:38:39.726495   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:39.726518   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:39.726526   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:39.726530   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:39.729687   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:40.226620   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:40.226646   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:40.226656   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:40.226662   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:40.229679   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:40.726552   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:40.726575   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:40.726583   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:40.726588   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:40.729769   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:41.226663   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:41.226690   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:41.226702   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:41.226707   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:41.236833   22139 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0812 10:38:41.237603   22139 node_ready.go:53] node "ha-919901-m02" has status "Ready":"False"
	I0812 10:38:41.726563   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:41.726589   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:41.726601   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:41.726608   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:41.729990   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:42.227153   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:42.227182   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:42.227193   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:42.227198   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:42.230829   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:42.726660   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:42.726688   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:42.726696   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:42.726699   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:42.730345   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:43.226482   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:43.226507   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:43.226517   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:43.226523   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:43.229844   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:43.727251   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:43.727274   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:43.727282   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:43.727286   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:43.731023   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:43.731816   22139 node_ready.go:53] node "ha-919901-m02" has status "Ready":"False"
	I0812 10:38:44.227276   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:44.227305   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:44.227316   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:44.227323   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:44.230564   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:44.726581   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:44.726612   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:44.726623   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:44.726628   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:44.729746   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:45.226791   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:45.226819   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:45.226827   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:45.226833   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:45.230032   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:45.727207   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:45.727231   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:45.727239   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:45.727243   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:45.730471   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:46.226474   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:46.226503   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:46.226512   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:46.226516   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:46.229727   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:46.230324   22139 node_ready.go:53] node "ha-919901-m02" has status "Ready":"False"
	I0812 10:38:46.726377   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:46.726401   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:46.726408   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:46.726413   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:46.729512   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:47.226695   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:47.226724   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:47.226734   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:47.226738   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:47.230484   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:47.726798   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:47.726829   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:47.726838   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:47.726841   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:47.730707   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:48.227105   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:48.227128   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:48.227136   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:48.227141   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:48.230801   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:48.231561   22139 node_ready.go:53] node "ha-919901-m02" has status "Ready":"False"
	I0812 10:38:48.726420   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:48.726445   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:48.726455   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:48.726461   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:48.730193   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:49.226336   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:49.226360   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:49.226368   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:49.226372   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:49.229915   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:49.726978   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:49.727001   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:49.727010   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:49.727014   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:49.730144   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:49.730713   22139 node_ready.go:49] node "ha-919901-m02" has status "Ready":"True"
	I0812 10:38:49.730731   22139 node_ready.go:38] duration metric: took 17.50460046s for node "ha-919901-m02" to be "Ready" ...
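The roughly 500ms GET loop above is simply a readiness poll on the Node object. The equivalent with client-go instead of raw round-trippers (a sketch; the kubeconfig path, node name and 6m timeout are the ones from this log):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19409-3774/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-919901-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node ha-919901-m02 is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for node to become Ready")
}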
	I0812 10:38:49.730739   22139 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 10:38:49.730797   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0812 10:38:49.730804   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:49.730812   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:49.730822   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:49.735736   22139 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 10:38:49.741879   22139 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rc7cl" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:49.741983   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rc7cl
	I0812 10:38:49.741994   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:49.742005   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:49.742013   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:49.745764   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:49.746718   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:38:49.746735   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:49.746748   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:49.746753   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:49.749207   22139 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 10:38:49.749644   22139 pod_ready.go:92] pod "coredns-7db6d8ff4d-rc7cl" in "kube-system" namespace has status "Ready":"True"
	I0812 10:38:49.749660   22139 pod_ready.go:81] duration metric: took 7.755653ms for pod "coredns-7db6d8ff4d-rc7cl" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:49.749670   22139 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wstd4" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:49.749718   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-wstd4
	I0812 10:38:49.749725   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:49.749732   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:49.749738   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:49.752354   22139 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 10:38:49.753169   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:38:49.753187   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:49.753197   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:49.753200   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:49.756221   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:49.756972   22139 pod_ready.go:92] pod "coredns-7db6d8ff4d-wstd4" in "kube-system" namespace has status "Ready":"True"
	I0812 10:38:49.756989   22139 pod_ready.go:81] duration metric: took 7.312835ms for pod "coredns-7db6d8ff4d-wstd4" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:49.756998   22139 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-919901" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:49.757054   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-919901
	I0812 10:38:49.757063   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:49.757070   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:49.757074   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:49.759711   22139 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 10:38:49.760409   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:38:49.760421   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:49.760428   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:49.760431   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:49.763367   22139 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 10:38:49.763803   22139 pod_ready.go:92] pod "etcd-ha-919901" in "kube-system" namespace has status "Ready":"True"
	I0812 10:38:49.763821   22139 pod_ready.go:81] duration metric: took 6.817376ms for pod "etcd-ha-919901" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:49.763831   22139 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-919901-m02" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:49.763903   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-919901-m02
	I0812 10:38:49.763913   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:49.763919   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:49.763922   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:49.766801   22139 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 10:38:49.767604   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:49.767620   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:49.767636   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:49.767640   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:49.770437   22139 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 10:38:49.770792   22139 pod_ready.go:92] pod "etcd-ha-919901-m02" in "kube-system" namespace has status "Ready":"True"
	I0812 10:38:49.770808   22139 pod_ready.go:81] duration metric: took 6.970572ms for pod "etcd-ha-919901-m02" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:49.770821   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-919901" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:49.927159   22139 request.go:629] Waited for 156.277068ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-919901
	I0812 10:38:49.927255   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-919901
	I0812 10:38:49.927267   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:49.927278   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:49.927289   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:49.930631   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:50.127617   22139 request.go:629] Waited for 196.417628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:38:50.127710   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:38:50.127719   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:50.127728   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:50.127734   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:50.131094   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:50.131662   22139 pod_ready.go:92] pod "kube-apiserver-ha-919901" in "kube-system" namespace has status "Ready":"True"
	I0812 10:38:50.131684   22139 pod_ready.go:81] duration metric: took 360.85671ms for pod "kube-apiserver-ha-919901" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:50.131693   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-919901-m02" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:50.327667   22139 request.go:629] Waited for 195.895295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-919901-m02
	I0812 10:38:50.327727   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-919901-m02
	I0812 10:38:50.327732   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:50.327739   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:50.327744   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:50.330866   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:50.527854   22139 request.go:629] Waited for 196.367698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:50.527919   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:50.527947   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:50.527958   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:50.527966   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:50.532132   22139 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 10:38:50.533005   22139 pod_ready.go:92] pod "kube-apiserver-ha-919901-m02" in "kube-system" namespace has status "Ready":"True"
	I0812 10:38:50.533025   22139 pod_ready.go:81] duration metric: took 401.325416ms for pod "kube-apiserver-ha-919901-m02" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:50.533034   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-919901" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:50.727037   22139 request.go:629] Waited for 193.930717ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-919901
	I0812 10:38:50.727094   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-919901
	I0812 10:38:50.727099   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:50.727109   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:50.727115   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:50.730807   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:50.927730   22139 request.go:629] Waited for 196.334188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:38:50.927804   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:38:50.927810   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:50.927817   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:50.927820   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:50.931132   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:50.931685   22139 pod_ready.go:92] pod "kube-controller-manager-ha-919901" in "kube-system" namespace has status "Ready":"True"
	I0812 10:38:50.931707   22139 pod_ready.go:81] duration metric: took 398.666953ms for pod "kube-controller-manager-ha-919901" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:50.931716   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-919901-m02" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:51.127764   22139 request.go:629] Waited for 195.969056ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-919901-m02
	I0812 10:38:51.127829   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-919901-m02
	I0812 10:38:51.127836   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:51.127847   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:51.127855   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:51.131164   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:51.326963   22139 request.go:629] Waited for 195.080527ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:51.327036   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:51.327042   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:51.327050   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:51.327054   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:51.331212   22139 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 10:38:51.331666   22139 pod_ready.go:92] pod "kube-controller-manager-ha-919901-m02" in "kube-system" namespace has status "Ready":"True"
	I0812 10:38:51.331686   22139 pod_ready.go:81] duration metric: took 399.963516ms for pod "kube-controller-manager-ha-919901-m02" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:51.331696   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cczfj" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:51.527131   22139 request.go:629] Waited for 195.356334ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cczfj
	I0812 10:38:51.527194   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cczfj
	I0812 10:38:51.527202   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:51.527213   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:51.527221   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:51.530551   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:51.727563   22139 request.go:629] Waited for 196.347965ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:51.727635   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:51.727641   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:51.727648   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:51.727652   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:51.730969   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:51.731393   22139 pod_ready.go:92] pod "kube-proxy-cczfj" in "kube-system" namespace has status "Ready":"True"
	I0812 10:38:51.731411   22139 pod_ready.go:81] duration metric: took 399.709277ms for pod "kube-proxy-cczfj" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:51.731420   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ftvfl" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:51.927584   22139 request.go:629] Waited for 196.106818ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ftvfl
	I0812 10:38:51.927654   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ftvfl
	I0812 10:38:51.927661   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:51.927671   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:51.927675   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:51.931432   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:52.127492   22139 request.go:629] Waited for 195.483215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:38:52.127565   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:38:52.127572   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:52.127582   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:52.127591   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:52.131126   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:52.131914   22139 pod_ready.go:92] pod "kube-proxy-ftvfl" in "kube-system" namespace has status "Ready":"True"
	I0812 10:38:52.131934   22139 pod_ready.go:81] duration metric: took 400.509323ms for pod "kube-proxy-ftvfl" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:52.131943   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-919901" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:52.328036   22139 request.go:629] Waited for 196.023184ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-919901
	I0812 10:38:52.328118   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-919901
	I0812 10:38:52.328126   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:52.328136   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:52.328143   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:52.331516   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:52.527368   22139 request.go:629] Waited for 195.356406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:38:52.527442   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:38:52.527447   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:52.527454   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:52.527458   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:52.531233   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:52.531867   22139 pod_ready.go:92] pod "kube-scheduler-ha-919901" in "kube-system" namespace has status "Ready":"True"
	I0812 10:38:52.531886   22139 pod_ready.go:81] duration metric: took 399.936973ms for pod "kube-scheduler-ha-919901" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:52.531897   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-919901-m02" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:52.727059   22139 request.go:629] Waited for 195.088541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-919901-m02
	I0812 10:38:52.727166   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-919901-m02
	I0812 10:38:52.727178   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:52.727189   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:52.727201   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:52.731062   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:52.928053   22139 request.go:629] Waited for 196.421191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:52.928132   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:38:52.928140   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:52.928151   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:52.928156   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:52.931935   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:38:52.932683   22139 pod_ready.go:92] pod "kube-scheduler-ha-919901-m02" in "kube-system" namespace has status "Ready":"True"
	I0812 10:38:52.932704   22139 pod_ready.go:81] duration metric: took 400.799965ms for pod "kube-scheduler-ha-919901-m02" in "kube-system" namespace to be "Ready" ...
	I0812 10:38:52.932715   22139 pod_ready.go:38] duration metric: took 3.20196498s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
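Those per-pod waits boil down to listing kube-system pods for each of the label selectors above and checking the PodReady condition. A condensed client-go sketch that does one pass over the same selectors (a real waiter would retry with a timeout, as the node check does):

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19409-3774/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// The system-critical selectors from the log above.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("%-35s %-45s Ready=%v\n", sel, p.Name, ready)
		}
	}
}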
	I0812 10:38:52.932730   22139 api_server.go:52] waiting for apiserver process to appear ...
	I0812 10:38:52.932788   22139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:38:52.948888   22139 api_server.go:72] duration metric: took 21.043379284s to wait for apiserver process to appear ...
	I0812 10:38:52.948914   22139 api_server.go:88] waiting for apiserver healthz status ...
	I0812 10:38:52.948932   22139 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I0812 10:38:52.953103   22139 api_server.go:279] https://192.168.39.5:8443/healthz returned 200:
	ok
	I0812 10:38:52.953162   22139 round_trippers.go:463] GET https://192.168.39.5:8443/version
	I0812 10:38:52.953167   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:52.953175   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:52.953184   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:52.954149   22139 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0812 10:38:52.954246   22139 api_server.go:141] control plane version: v1.30.3
	I0812 10:38:52.954261   22139 api_server.go:131] duration metric: took 5.341963ms to wait for apiserver health ...
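The healthz and version probes above are plain HTTPS GETs authenticated with the profile's client certificate from the rest.Config dumped earlier. A stand-alone sketch using only net/http; the cert, key and CA paths are the ones shown in the kapi.go line in this log:

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	cert, err := tls.LoadX509KeyPair(
		"/home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/client.crt",
		"/home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/client.key",
	)
	if err != nil {
		log.Fatal(err)
	}
	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
	}}

	// /healthz should answer "ok"; /version returns the control plane version JSON.
	for _, path := range []string{"/healthz", "/version"} {
		resp, err := client.Get("https://192.168.39.5:8443" + path)
		if err != nil {
			log.Fatal(err)
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s -> %s\n%s\n", path, resp.Status, body)
	}
}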
	I0812 10:38:52.954269   22139 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 10:38:53.127956   22139 request.go:629] Waited for 173.629365ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0812 10:38:53.128015   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0812 10:38:53.128021   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:53.128031   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:53.128037   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:53.133390   22139 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0812 10:38:53.137526   22139 system_pods.go:59] 17 kube-system pods found
	I0812 10:38:53.137564   22139 system_pods.go:61] "coredns-7db6d8ff4d-rc7cl" [92f21234-d4e8-4f0e-a8e5-356db2297843] Running
	I0812 10:38:53.137569   22139 system_pods.go:61] "coredns-7db6d8ff4d-wstd4" [53bfc998-8d70-4dc5-b0f9-a78610183a2b] Running
	I0812 10:38:53.137573   22139 system_pods.go:61] "etcd-ha-919901" [a2c1d3fe-ff0a-4239-86b1-fa95100bf490] Running
	I0812 10:38:53.137577   22139 system_pods.go:61] "etcd-ha-919901-m02" [37a916a1-fb7f-4256-9ce9-e77d68b91eec] Running
	I0812 10:38:53.137580   22139 system_pods.go:61] "kindnet-8cqm5" [ac0a56b9-e7f9-439d-a088-54463e9d41bc] Running
	I0812 10:38:53.137583   22139 system_pods.go:61] "kindnet-k5wz9" [75e585a5-9ab7-4211-8ed0-dc1d21345883] Running
	I0812 10:38:53.137587   22139 system_pods.go:61] "kube-apiserver-ha-919901" [193c8d04-dc77-4004-8000-fd396b727895] Running
	I0812 10:38:53.137590   22139 system_pods.go:61] "kube-apiserver-ha-919901-m02" [58d119c5-c69e-4a89-bab6-18a82f0cfe3f] Running
	I0812 10:38:53.137593   22139 system_pods.go:61] "kube-controller-manager-ha-919901" [242663e4-854c-4b58-9864-cabeb79577f7] Running
	I0812 10:38:53.137596   22139 system_pods.go:61] "kube-controller-manager-ha-919901-m02" [8036adae-dadc-4dbe-af53-de82cc21d9c2] Running
	I0812 10:38:53.137599   22139 system_pods.go:61] "kube-proxy-cczfj" [711059fc-2c4a-4706-97a5-007be66ecaff] Running
	I0812 10:38:53.137602   22139 system_pods.go:61] "kube-proxy-ftvfl" [7ed243a1-62f6-4ad1-8873-0fbe1756be9e] Running
	I0812 10:38:53.137605   22139 system_pods.go:61] "kube-scheduler-ha-919901" [ec67c1cf-8e1c-4973-8f96-c558fccb26be] Running
	I0812 10:38:53.137608   22139 system_pods.go:61] "kube-scheduler-ha-919901-m02" [8cf797a6-cf19-4653-a998-395260a0ee1a] Running
	I0812 10:38:53.137611   22139 system_pods.go:61] "kube-vip-ha-919901" [46735446-a563-4870-9509-441ad0cd5c45] Running
	I0812 10:38:53.137615   22139 system_pods.go:61] "kube-vip-ha-919901-m02" [9df99381-0503-4bef-ac63-a06f687d1c1a] Running
	I0812 10:38:53.137622   22139 system_pods.go:61] "storage-provisioner" [6d697e68-33fa-4784-90d8-0561d3fff6a8] Running
	I0812 10:38:53.137630   22139 system_pods.go:74] duration metric: took 183.354956ms to wait for pod list to return data ...
	I0812 10:38:53.137644   22139 default_sa.go:34] waiting for default service account to be created ...
	I0812 10:38:53.327062   22139 request.go:629] Waited for 189.323961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/default/serviceaccounts
	I0812 10:38:53.327126   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/default/serviceaccounts
	I0812 10:38:53.327133   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:53.327144   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:53.327148   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:53.331496   22139 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 10:38:53.331781   22139 default_sa.go:45] found service account: "default"
	I0812 10:38:53.331805   22139 default_sa.go:55] duration metric: took 194.152257ms for default service account to be created ...
	I0812 10:38:53.331816   22139 system_pods.go:116] waiting for k8s-apps to be running ...
	I0812 10:38:53.527422   22139 request.go:629] Waited for 195.539325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0812 10:38:53.527490   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0812 10:38:53.527495   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:53.527502   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:53.527506   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:53.533723   22139 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0812 10:38:53.537850   22139 system_pods.go:86] 17 kube-system pods found
	I0812 10:38:53.537879   22139 system_pods.go:89] "coredns-7db6d8ff4d-rc7cl" [92f21234-d4e8-4f0e-a8e5-356db2297843] Running
	I0812 10:38:53.537884   22139 system_pods.go:89] "coredns-7db6d8ff4d-wstd4" [53bfc998-8d70-4dc5-b0f9-a78610183a2b] Running
	I0812 10:38:53.537893   22139 system_pods.go:89] "etcd-ha-919901" [a2c1d3fe-ff0a-4239-86b1-fa95100bf490] Running
	I0812 10:38:53.537897   22139 system_pods.go:89] "etcd-ha-919901-m02" [37a916a1-fb7f-4256-9ce9-e77d68b91eec] Running
	I0812 10:38:53.537901   22139 system_pods.go:89] "kindnet-8cqm5" [ac0a56b9-e7f9-439d-a088-54463e9d41bc] Running
	I0812 10:38:53.537905   22139 system_pods.go:89] "kindnet-k5wz9" [75e585a5-9ab7-4211-8ed0-dc1d21345883] Running
	I0812 10:38:53.537909   22139 system_pods.go:89] "kube-apiserver-ha-919901" [193c8d04-dc77-4004-8000-fd396b727895] Running
	I0812 10:38:53.537913   22139 system_pods.go:89] "kube-apiserver-ha-919901-m02" [58d119c5-c69e-4a89-bab6-18a82f0cfe3f] Running
	I0812 10:38:53.537917   22139 system_pods.go:89] "kube-controller-manager-ha-919901" [242663e4-854c-4b58-9864-cabeb79577f7] Running
	I0812 10:38:53.537921   22139 system_pods.go:89] "kube-controller-manager-ha-919901-m02" [8036adae-dadc-4dbe-af53-de82cc21d9c2] Running
	I0812 10:38:53.537926   22139 system_pods.go:89] "kube-proxy-cczfj" [711059fc-2c4a-4706-97a5-007be66ecaff] Running
	I0812 10:38:53.537935   22139 system_pods.go:89] "kube-proxy-ftvfl" [7ed243a1-62f6-4ad1-8873-0fbe1756be9e] Running
	I0812 10:38:53.537941   22139 system_pods.go:89] "kube-scheduler-ha-919901" [ec67c1cf-8e1c-4973-8f96-c558fccb26be] Running
	I0812 10:38:53.537947   22139 system_pods.go:89] "kube-scheduler-ha-919901-m02" [8cf797a6-cf19-4653-a998-395260a0ee1a] Running
	I0812 10:38:53.537955   22139 system_pods.go:89] "kube-vip-ha-919901" [46735446-a563-4870-9509-441ad0cd5c45] Running
	I0812 10:38:53.537962   22139 system_pods.go:89] "kube-vip-ha-919901-m02" [9df99381-0503-4bef-ac63-a06f687d1c1a] Running
	I0812 10:38:53.537971   22139 system_pods.go:89] "storage-provisioner" [6d697e68-33fa-4784-90d8-0561d3fff6a8] Running
	I0812 10:38:53.537978   22139 system_pods.go:126] duration metric: took 206.157149ms to wait for k8s-apps to be running ...
	I0812 10:38:53.537987   22139 system_svc.go:44] waiting for kubelet service to be running ....
	I0812 10:38:53.538030   22139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:38:53.553266   22139 system_svc.go:56] duration metric: took 15.26828ms WaitForService to wait for kubelet
	I0812 10:38:53.553295   22139 kubeadm.go:582] duration metric: took 21.647791829s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 10:38:53.553316   22139 node_conditions.go:102] verifying NodePressure condition ...
	I0812 10:38:53.727714   22139 request.go:629] Waited for 174.32901ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes
	I0812 10:38:53.727770   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes
	I0812 10:38:53.727775   22139 round_trippers.go:469] Request Headers:
	I0812 10:38:53.727782   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:38:53.727786   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:38:53.732104   22139 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 10:38:53.733158   22139 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 10:38:53.733182   22139 node_conditions.go:123] node cpu capacity is 2
	I0812 10:38:53.733201   22139 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 10:38:53.733205   22139 node_conditions.go:123] node cpu capacity is 2
	I0812 10:38:53.733209   22139 node_conditions.go:105] duration metric: took 179.887884ms to run NodePressure ...
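The lines above show minikube listing kube-system pods, confirming the default service account, and then reading each node's ephemeral-storage and CPU capacity for the NodePressure check. A minimal client-go sketch of that last capacity read follows; it is illustrative only (not minikube's own code) and assumes a kubeconfig at the default location.

// Sketch (assumption: kubeconfig at ~/.kube/config): list nodes and print the
// same capacity figures the log reports (ephemeral-storage, cpu).
package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		// e.g. "node ha-919901: ephemeral-storage=17734596Ki cpu=2"
		fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}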
	I0812 10:38:53.733227   22139 start.go:241] waiting for startup goroutines ...
	I0812 10:38:53.733261   22139 start.go:255] writing updated cluster config ...
	I0812 10:38:53.735677   22139 out.go:177] 
	I0812 10:38:53.737271   22139 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:38:53.737407   22139 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/config.json ...
	I0812 10:38:53.739264   22139 out.go:177] * Starting "ha-919901-m03" control-plane node in "ha-919901" cluster
	I0812 10:38:53.740850   22139 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 10:38:53.740902   22139 cache.go:56] Caching tarball of preloaded images
	I0812 10:38:53.741013   22139 preload.go:172] Found /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 10:38:53.741029   22139 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 10:38:53.741144   22139 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/config.json ...
	I0812 10:38:53.741371   22139 start.go:360] acquireMachinesLock for ha-919901-m03: {Name:mkf1f3ff7da562a087a1e344d13e67c3c8140973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 10:38:53.741418   22139 start.go:364] duration metric: took 26.493µs to acquireMachinesLock for "ha-919901-m03"
	I0812 10:38:53.741441   22139 start.go:93] Provisioning new machine with config: &{Name:ha-919901 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-919901 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 10:38:53.741573   22139 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0812 10:38:53.743401   22139 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 10:38:53.743491   22139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:38:53.743524   22139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:38:53.758500   22139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40015
	I0812 10:38:53.758936   22139 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:38:53.759439   22139 main.go:141] libmachine: Using API Version  1
	I0812 10:38:53.759461   22139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:38:53.759847   22139 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:38:53.760039   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetMachineName
	I0812 10:38:53.760203   22139 main.go:141] libmachine: (ha-919901-m03) Calling .DriverName
	I0812 10:38:53.760400   22139 start.go:159] libmachine.API.Create for "ha-919901" (driver="kvm2")
	I0812 10:38:53.760425   22139 client.go:168] LocalClient.Create starting
	I0812 10:38:53.760456   22139 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem
	I0812 10:38:53.760488   22139 main.go:141] libmachine: Decoding PEM data...
	I0812 10:38:53.760503   22139 main.go:141] libmachine: Parsing certificate...
	I0812 10:38:53.760550   22139 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem
	I0812 10:38:53.760568   22139 main.go:141] libmachine: Decoding PEM data...
	I0812 10:38:53.760581   22139 main.go:141] libmachine: Parsing certificate...
	I0812 10:38:53.760599   22139 main.go:141] libmachine: Running pre-create checks...
	I0812 10:38:53.760607   22139 main.go:141] libmachine: (ha-919901-m03) Calling .PreCreateCheck
	I0812 10:38:53.760845   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetConfigRaw
	I0812 10:38:53.761340   22139 main.go:141] libmachine: Creating machine...
	I0812 10:38:53.761353   22139 main.go:141] libmachine: (ha-919901-m03) Calling .Create
	I0812 10:38:53.761491   22139 main.go:141] libmachine: (ha-919901-m03) Creating KVM machine...
	I0812 10:38:53.762838   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found existing default KVM network
	I0812 10:38:53.762960   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found existing private KVM network mk-ha-919901
	I0812 10:38:53.763143   22139 main.go:141] libmachine: (ha-919901-m03) Setting up store path in /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03 ...
	I0812 10:38:53.763170   22139 main.go:141] libmachine: (ha-919901-m03) Building disk image from file:///home/jenkins/minikube-integration/19409-3774/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0812 10:38:53.763238   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:38:53.763134   23028 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 10:38:53.763388   22139 main.go:141] libmachine: (ha-919901-m03) Downloading /home/jenkins/minikube-integration/19409-3774/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19409-3774/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0812 10:38:53.996979   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:38:53.996832   23028 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/id_rsa...
	I0812 10:38:54.081688   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:38:54.081557   23028 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/ha-919901-m03.rawdisk...
	I0812 10:38:54.081714   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Writing magic tar header
	I0812 10:38:54.081729   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Writing SSH key tar header
	I0812 10:38:54.081742   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:38:54.081686   23028 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03 ...
	I0812 10:38:54.081770   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03
	I0812 10:38:54.081830   22139 main.go:141] libmachine: (ha-919901-m03) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03 (perms=drwx------)
	I0812 10:38:54.081849   22139 main.go:141] libmachine: (ha-919901-m03) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube/machines (perms=drwxr-xr-x)
	I0812 10:38:54.081858   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube/machines
	I0812 10:38:54.081868   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 10:38:54.081885   22139 main.go:141] libmachine: (ha-919901-m03) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube (perms=drwxr-xr-x)
	I0812 10:38:54.081896   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774
	I0812 10:38:54.081910   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0812 10:38:54.081920   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Checking permissions on dir: /home/jenkins
	I0812 10:38:54.081930   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Checking permissions on dir: /home
	I0812 10:38:54.081941   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Skipping /home - not owner
	I0812 10:38:54.081955   22139 main.go:141] libmachine: (ha-919901-m03) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774 (perms=drwxrwxr-x)
	I0812 10:38:54.081967   22139 main.go:141] libmachine: (ha-919901-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0812 10:38:54.082002   22139 main.go:141] libmachine: (ha-919901-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0812 10:38:54.082027   22139 main.go:141] libmachine: (ha-919901-m03) Creating domain...
	I0812 10:38:54.082952   22139 main.go:141] libmachine: (ha-919901-m03) define libvirt domain using xml: 
	I0812 10:38:54.082970   22139 main.go:141] libmachine: (ha-919901-m03) <domain type='kvm'>
	I0812 10:38:54.082977   22139 main.go:141] libmachine: (ha-919901-m03)   <name>ha-919901-m03</name>
	I0812 10:38:54.082986   22139 main.go:141] libmachine: (ha-919901-m03)   <memory unit='MiB'>2200</memory>
	I0812 10:38:54.082991   22139 main.go:141] libmachine: (ha-919901-m03)   <vcpu>2</vcpu>
	I0812 10:38:54.083000   22139 main.go:141] libmachine: (ha-919901-m03)   <features>
	I0812 10:38:54.083005   22139 main.go:141] libmachine: (ha-919901-m03)     <acpi/>
	I0812 10:38:54.083012   22139 main.go:141] libmachine: (ha-919901-m03)     <apic/>
	I0812 10:38:54.083017   22139 main.go:141] libmachine: (ha-919901-m03)     <pae/>
	I0812 10:38:54.083025   22139 main.go:141] libmachine: (ha-919901-m03)     
	I0812 10:38:54.083030   22139 main.go:141] libmachine: (ha-919901-m03)   </features>
	I0812 10:38:54.083035   22139 main.go:141] libmachine: (ha-919901-m03)   <cpu mode='host-passthrough'>
	I0812 10:38:54.083041   22139 main.go:141] libmachine: (ha-919901-m03)   
	I0812 10:38:54.083052   22139 main.go:141] libmachine: (ha-919901-m03)   </cpu>
	I0812 10:38:54.083078   22139 main.go:141] libmachine: (ha-919901-m03)   <os>
	I0812 10:38:54.083101   22139 main.go:141] libmachine: (ha-919901-m03)     <type>hvm</type>
	I0812 10:38:54.083112   22139 main.go:141] libmachine: (ha-919901-m03)     <boot dev='cdrom'/>
	I0812 10:38:54.083124   22139 main.go:141] libmachine: (ha-919901-m03)     <boot dev='hd'/>
	I0812 10:38:54.083134   22139 main.go:141] libmachine: (ha-919901-m03)     <bootmenu enable='no'/>
	I0812 10:38:54.083145   22139 main.go:141] libmachine: (ha-919901-m03)   </os>
	I0812 10:38:54.083167   22139 main.go:141] libmachine: (ha-919901-m03)   <devices>
	I0812 10:38:54.083187   22139 main.go:141] libmachine: (ha-919901-m03)     <disk type='file' device='cdrom'>
	I0812 10:38:54.083202   22139 main.go:141] libmachine: (ha-919901-m03)       <source file='/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/boot2docker.iso'/>
	I0812 10:38:54.083213   22139 main.go:141] libmachine: (ha-919901-m03)       <target dev='hdc' bus='scsi'/>
	I0812 10:38:54.083223   22139 main.go:141] libmachine: (ha-919901-m03)       <readonly/>
	I0812 10:38:54.083233   22139 main.go:141] libmachine: (ha-919901-m03)     </disk>
	I0812 10:38:54.083245   22139 main.go:141] libmachine: (ha-919901-m03)     <disk type='file' device='disk'>
	I0812 10:38:54.083262   22139 main.go:141] libmachine: (ha-919901-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0812 10:38:54.083279   22139 main.go:141] libmachine: (ha-919901-m03)       <source file='/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/ha-919901-m03.rawdisk'/>
	I0812 10:38:54.083290   22139 main.go:141] libmachine: (ha-919901-m03)       <target dev='hda' bus='virtio'/>
	I0812 10:38:54.083302   22139 main.go:141] libmachine: (ha-919901-m03)     </disk>
	I0812 10:38:54.083313   22139 main.go:141] libmachine: (ha-919901-m03)     <interface type='network'>
	I0812 10:38:54.083323   22139 main.go:141] libmachine: (ha-919901-m03)       <source network='mk-ha-919901'/>
	I0812 10:38:54.083333   22139 main.go:141] libmachine: (ha-919901-m03)       <model type='virtio'/>
	I0812 10:38:54.083345   22139 main.go:141] libmachine: (ha-919901-m03)     </interface>
	I0812 10:38:54.083356   22139 main.go:141] libmachine: (ha-919901-m03)     <interface type='network'>
	I0812 10:38:54.083370   22139 main.go:141] libmachine: (ha-919901-m03)       <source network='default'/>
	I0812 10:38:54.083380   22139 main.go:141] libmachine: (ha-919901-m03)       <model type='virtio'/>
	I0812 10:38:54.083391   22139 main.go:141] libmachine: (ha-919901-m03)     </interface>
	I0812 10:38:54.083401   22139 main.go:141] libmachine: (ha-919901-m03)     <serial type='pty'>
	I0812 10:38:54.083411   22139 main.go:141] libmachine: (ha-919901-m03)       <target port='0'/>
	I0812 10:38:54.083420   22139 main.go:141] libmachine: (ha-919901-m03)     </serial>
	I0812 10:38:54.083432   22139 main.go:141] libmachine: (ha-919901-m03)     <console type='pty'>
	I0812 10:38:54.083443   22139 main.go:141] libmachine: (ha-919901-m03)       <target type='serial' port='0'/>
	I0812 10:38:54.083453   22139 main.go:141] libmachine: (ha-919901-m03)     </console>
	I0812 10:38:54.083464   22139 main.go:141] libmachine: (ha-919901-m03)     <rng model='virtio'>
	I0812 10:38:54.083476   22139 main.go:141] libmachine: (ha-919901-m03)       <backend model='random'>/dev/random</backend>
	I0812 10:38:54.083488   22139 main.go:141] libmachine: (ha-919901-m03)     </rng>
	I0812 10:38:54.083498   22139 main.go:141] libmachine: (ha-919901-m03)     
	I0812 10:38:54.083507   22139 main.go:141] libmachine: (ha-919901-m03)     
	I0812 10:38:54.083517   22139 main.go:141] libmachine: (ha-919901-m03)   </devices>
	I0812 10:38:54.083528   22139 main.go:141] libmachine: (ha-919901-m03) </domain>
	I0812 10:38:54.083541   22139 main.go:141] libmachine: (ha-919901-m03) 
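The block above is the libvirt domain XML the kvm2 driver generates for the m03 node (CD-ROM boot ISO, raw disk, two virtio NICs on the mk-ha-919901 and default networks, serial console, virtio RNG). As a rough sketch only, and assuming the libvirt.org/go/libvirt bindings and a hypothetical file holding XML like the one logged, defining and starting such a domain looks like this; it is not the code path the driver itself uses.

// Sketch: define a persistent libvirt domain from XML and boot it
// (the log's "define libvirt domain using xml" then "Creating domain...").
// /tmp/ha-919901-m03.xml is a hypothetical path for illustration.
package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("/tmp/ha-919901-m03.xml")
	if err != nil {
		log.Fatal(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config above
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
	log.Println("domain defined and started")
}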
	I0812 10:38:54.090431   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:48:dd:bb in network default
	I0812 10:38:54.090921   22139 main.go:141] libmachine: (ha-919901-m03) Ensuring networks are active...
	I0812 10:38:54.090948   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:38:54.091665   22139 main.go:141] libmachine: (ha-919901-m03) Ensuring network default is active
	I0812 10:38:54.092020   22139 main.go:141] libmachine: (ha-919901-m03) Ensuring network mk-ha-919901 is active
	I0812 10:38:54.092425   22139 main.go:141] libmachine: (ha-919901-m03) Getting domain xml...
	I0812 10:38:54.093233   22139 main.go:141] libmachine: (ha-919901-m03) Creating domain...
	I0812 10:38:55.394561   22139 main.go:141] libmachine: (ha-919901-m03) Waiting to get IP...
	I0812 10:38:55.395496   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:38:55.395903   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find current IP address of domain ha-919901-m03 in network mk-ha-919901
	I0812 10:38:55.395961   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:38:55.395890   23028 retry.go:31] will retry after 248.022365ms: waiting for machine to come up
	I0812 10:38:55.645744   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:38:55.646146   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find current IP address of domain ha-919901-m03 in network mk-ha-919901
	I0812 10:38:55.646183   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:38:55.646096   23028 retry.go:31] will retry after 385.515989ms: waiting for machine to come up
	I0812 10:38:56.033819   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:38:56.034351   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find current IP address of domain ha-919901-m03 in network mk-ha-919901
	I0812 10:38:56.034379   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:38:56.034303   23028 retry.go:31] will retry after 394.859232ms: waiting for machine to come up
	I0812 10:38:56.430996   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:38:56.431612   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find current IP address of domain ha-919901-m03 in network mk-ha-919901
	I0812 10:38:56.431635   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:38:56.431557   23028 retry.go:31] will retry after 515.927915ms: waiting for machine to come up
	I0812 10:38:56.949288   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:38:56.949840   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find current IP address of domain ha-919901-m03 in network mk-ha-919901
	I0812 10:38:56.949873   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:38:56.949755   23028 retry.go:31] will retry after 615.89923ms: waiting for machine to come up
	I0812 10:38:57.567348   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:38:57.567863   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find current IP address of domain ha-919901-m03 in network mk-ha-919901
	I0812 10:38:57.567882   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:38:57.567815   23028 retry.go:31] will retry after 824.248304ms: waiting for machine to come up
	I0812 10:38:58.393522   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:38:58.394025   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find current IP address of domain ha-919901-m03 in network mk-ha-919901
	I0812 10:38:58.394053   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:38:58.393972   23028 retry.go:31] will retry after 903.663556ms: waiting for machine to come up
	I0812 10:38:59.299460   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:38:59.299991   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find current IP address of domain ha-919901-m03 in network mk-ha-919901
	I0812 10:38:59.300022   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:38:59.299956   23028 retry.go:31] will retry after 943.185292ms: waiting for machine to come up
	I0812 10:39:00.244291   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:00.244745   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find current IP address of domain ha-919901-m03 in network mk-ha-919901
	I0812 10:39:00.244774   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:39:00.244692   23028 retry.go:31] will retry after 1.75910003s: waiting for machine to come up
	I0812 10:39:02.006042   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:02.006370   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find current IP address of domain ha-919901-m03 in network mk-ha-919901
	I0812 10:39:02.006396   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:39:02.006341   23028 retry.go:31] will retry after 1.468388382s: waiting for machine to come up
	I0812 10:39:03.476095   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:03.476591   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find current IP address of domain ha-919901-m03 in network mk-ha-919901
	I0812 10:39:03.476623   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:39:03.476562   23028 retry.go:31] will retry after 2.072007383s: waiting for machine to come up
	I0812 10:39:05.550334   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:05.550976   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find current IP address of domain ha-919901-m03 in network mk-ha-919901
	I0812 10:39:05.551009   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:39:05.550923   23028 retry.go:31] will retry after 2.406978667s: waiting for machine to come up
	I0812 10:39:07.959093   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:07.959428   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find current IP address of domain ha-919901-m03 in network mk-ha-919901
	I0812 10:39:07.959458   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:39:07.959381   23028 retry.go:31] will retry after 4.191781323s: waiting for machine to come up
	I0812 10:39:12.154110   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:12.154496   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find current IP address of domain ha-919901-m03 in network mk-ha-919901
	I0812 10:39:12.154526   22139 main.go:141] libmachine: (ha-919901-m03) DBG | I0812 10:39:12.154461   23028 retry.go:31] will retry after 3.475577868s: waiting for machine to come up
	I0812 10:39:15.632234   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:15.632880   22139 main.go:141] libmachine: (ha-919901-m03) Found IP for machine: 192.168.39.195
	I0812 10:39:15.632905   22139 main.go:141] libmachine: (ha-919901-m03) Reserving static IP address...
	I0812 10:39:15.632920   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has current primary IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:15.633322   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find host DHCP lease matching {name: "ha-919901-m03", mac: "52:54:00:0f:9a:b2", ip: "192.168.39.195"} in network mk-ha-919901
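The repeated "will retry after ...: waiting for machine to come up" lines above are a growing, jittered backoff loop that polls until the new VM obtains a DHCP lease. The sketch below shows that generic pattern only; lookupIP is a placeholder, not a real minikube or libmachine helper, and the delays and cap are assumptions for illustration.

// Generic retry-with-backoff sketch of the wait-for-IP phase logged above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("machine has no IP yet")

// lookupIP stands in for inspecting the network's DHCP leases for the VM's MAC.
func lookupIP() (string, error) {
	return "", errNoIP
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		ip, err := lookupIP()
		if err == nil {
			return ip, nil
		}
		// add jitter and grow the delay, capped at roughly 5s (assumed values)
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if backoff < 5*time.Second {
			backoff *= 2
		}
	}
	return "", fmt.Errorf("timed out after %s waiting for IP", timeout)
}

func main() {
	if ip, err := waitForIP(3 * time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("found IP:", ip)
	}
}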
	I0812 10:39:15.708534   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Getting to WaitForSSH function...
	I0812 10:39:15.708581   22139 main.go:141] libmachine: (ha-919901-m03) Reserved static IP address: 192.168.39.195
	I0812 10:39:15.708615   22139 main.go:141] libmachine: (ha-919901-m03) Waiting for SSH to be available...
	I0812 10:39:15.711497   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:15.711915   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901
	I0812 10:39:15.711943   22139 main.go:141] libmachine: (ha-919901-m03) DBG | unable to find defined IP address of network mk-ha-919901 interface with MAC address 52:54:00:0f:9a:b2
	I0812 10:39:15.712133   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Using SSH client type: external
	I0812 10:39:15.712161   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/id_rsa (-rw-------)
	I0812 10:39:15.712188   22139 main.go:141] libmachine: (ha-919901-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0812 10:39:15.712200   22139 main.go:141] libmachine: (ha-919901-m03) DBG | About to run SSH command:
	I0812 10:39:15.712218   22139 main.go:141] libmachine: (ha-919901-m03) DBG | exit 0
	I0812 10:39:15.716992   22139 main.go:141] libmachine: (ha-919901-m03) DBG | SSH cmd err, output: exit status 255: 
	I0812 10:39:15.717011   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0812 10:39:15.717020   22139 main.go:141] libmachine: (ha-919901-m03) DBG | command : exit 0
	I0812 10:39:15.717025   22139 main.go:141] libmachine: (ha-919901-m03) DBG | err     : exit status 255
	I0812 10:39:15.717032   22139 main.go:141] libmachine: (ha-919901-m03) DBG | output  : 
	I0812 10:39:18.719150   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Getting to WaitForSSH function...
	I0812 10:39:18.722036   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:18.722549   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:18.722571   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:18.722744   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Using SSH client type: external
	I0812 10:39:18.722808   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/id_rsa (-rw-------)
	I0812 10:39:18.722840   22139 main.go:141] libmachine: (ha-919901-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.195 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0812 10:39:18.722858   22139 main.go:141] libmachine: (ha-919901-m03) DBG | About to run SSH command:
	I0812 10:39:18.722886   22139 main.go:141] libmachine: (ha-919901-m03) DBG | exit 0
	I0812 10:39:18.853015   22139 main.go:141] libmachine: (ha-919901-m03) DBG | SSH cmd err, output: <nil>: 
	I0812 10:39:18.853304   22139 main.go:141] libmachine: (ha-919901-m03) KVM machine creation complete!
	I0812 10:39:18.853693   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetConfigRaw
	I0812 10:39:18.854248   22139 main.go:141] libmachine: (ha-919901-m03) Calling .DriverName
	I0812 10:39:18.854455   22139 main.go:141] libmachine: (ha-919901-m03) Calling .DriverName
	I0812 10:39:18.854659   22139 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0812 10:39:18.854676   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetState
	I0812 10:39:18.856425   22139 main.go:141] libmachine: Detecting operating system of created instance...
	I0812 10:39:18.856443   22139 main.go:141] libmachine: Waiting for SSH to be available...
	I0812 10:39:18.856456   22139 main.go:141] libmachine: Getting to WaitForSSH function...
	I0812 10:39:18.856464   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHHostname
	I0812 10:39:18.859008   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:18.859405   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:18.859434   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:18.859574   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHPort
	I0812 10:39:18.859732   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:39:18.859882   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:39:18.860046   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHUsername
	I0812 10:39:18.860210   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:39:18.860481   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0812 10:39:18.860502   22139 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0812 10:39:18.968298   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
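WaitForSSH, as logged above, simply runs "exit 0" over SSH until it succeeds (the first attempt failed with exit status 255 before the guest's sshd was ready). A minimal probe in the same spirit, using golang.org/x/crypto/ssh rather than libmachine's own client, is sketched below; the host and key path are taken from this run purely as example values.

// Illustrative SSH readiness probe: connect as "docker" with the machine key
// and run "exit 0"; a nil error means the guest is reachable.
package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func sshReady(host, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", host+":22", cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0")
}

func main() {
	// key path below is an example location, not the Jenkins path from this run
	if err := sshReady("192.168.39.195", os.ExpandEnv("$HOME/.minikube/machines/ha-919901-m03/id_rsa")); err != nil {
		log.Fatalf("SSH not ready: %v", err)
	}
	log.Println("SSH is available")
}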
	I0812 10:39:18.968329   22139 main.go:141] libmachine: Detecting the provisioner...
	I0812 10:39:18.968337   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHHostname
	I0812 10:39:18.971304   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:18.971798   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:18.971829   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:18.971981   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHPort
	I0812 10:39:18.972220   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:39:18.972450   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:39:18.972629   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHUsername
	I0812 10:39:18.972874   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:39:18.973052   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0812 10:39:18.973063   22139 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0812 10:39:19.085740   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0812 10:39:19.085861   22139 main.go:141] libmachine: found compatible host: buildroot
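Provisioner detection above boils down to running `cat /etc/os-release` and matching the ID field. A small parsing sketch follows; the osRelease constant is copied from the output logged above, while the function name is purely illustrative.

// Sketch: pick the provisioner from /etc/os-release content by its ID= line.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const osRelease = `NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"`

func osReleaseID(content string) string {
	sc := bufio.NewScanner(strings.NewReader(content))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if v, ok := strings.CutPrefix(line, "ID="); ok {
			return strings.Trim(v, `"`)
		}
	}
	return ""
}

func main() {
	if osReleaseID(osRelease) == "buildroot" {
		fmt.Println("found compatible host: buildroot")
	} else {
		fmt.Println("unknown provisioner")
	}
}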
	I0812 10:39:19.085877   22139 main.go:141] libmachine: Provisioning with buildroot...
	I0812 10:39:19.085888   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetMachineName
	I0812 10:39:19.086165   22139 buildroot.go:166] provisioning hostname "ha-919901-m03"
	I0812 10:39:19.086189   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetMachineName
	I0812 10:39:19.086402   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHHostname
	I0812 10:39:19.089552   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:19.089931   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:19.089960   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:19.090086   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHPort
	I0812 10:39:19.090280   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:39:19.090452   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:39:19.090612   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHUsername
	I0812 10:39:19.090783   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:39:19.090965   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0812 10:39:19.090978   22139 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-919901-m03 && echo "ha-919901-m03" | sudo tee /etc/hostname
	I0812 10:39:19.216661   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-919901-m03
	
	I0812 10:39:19.216698   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHHostname
	I0812 10:39:19.219545   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:19.219866   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:19.219896   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:19.220055   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHPort
	I0812 10:39:19.220222   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:39:19.220364   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:39:19.220509   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHUsername
	I0812 10:39:19.220667   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:39:19.220916   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0812 10:39:19.220938   22139 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-919901-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-919901-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-919901-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 10:39:19.337276   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 10:39:19.337308   22139 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19409-3774/.minikube CaCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19409-3774/.minikube}
	I0812 10:39:19.337327   22139 buildroot.go:174] setting up certificates
	I0812 10:39:19.337337   22139 provision.go:84] configureAuth start
	I0812 10:39:19.337352   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetMachineName
	I0812 10:39:19.337715   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetIP
	I0812 10:39:19.340775   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:19.341169   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:19.341198   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:19.341393   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHHostname
	I0812 10:39:19.343688   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:19.344068   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:19.344098   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:19.344201   22139 provision.go:143] copyHostCerts
	I0812 10:39:19.344230   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem
	I0812 10:39:19.344262   22139 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem, removing ...
	I0812 10:39:19.344271   22139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem
	I0812 10:39:19.344340   22139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem (1082 bytes)
	I0812 10:39:19.344440   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem
	I0812 10:39:19.344458   22139 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem, removing ...
	I0812 10:39:19.344462   22139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem
	I0812 10:39:19.344488   22139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem (1123 bytes)
	I0812 10:39:19.344531   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem
	I0812 10:39:19.344547   22139 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem, removing ...
	I0812 10:39:19.344553   22139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem
	I0812 10:39:19.344572   22139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem (1679 bytes)
	I0812 10:39:19.344619   22139 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem org=jenkins.ha-919901-m03 san=[127.0.0.1 192.168.39.195 ha-919901-m03 localhost minikube]
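configureAuth copies the host CA material and then signs a server certificate whose SANs match the list logged above (127.0.0.1, 192.168.39.195, ha-919901-m03, localhost, minikube). The crypto/x509 sketch below shows how such a SAN-bearing server cert can be signed from a CA key pair; the throwaway self-signed CA, serial numbers, and key sizes are assumptions for illustration, not minikube's implementation.

// Sketch: sign a server certificate with the SANs from the log, given a CA pair.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"log"
	"math/big"
	"net"
	"time"
)

func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-919901-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching san=[127.0.0.1 192.168.39.195 ha-919901-m03 localhost minikube]
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.195")},
		DNSNames:    []string{"ha-919901-m03", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}

func main() {
	// In the real flow the CA pair comes from the .minikube/certs directory;
	// here a throwaway self-signed CA keeps the sketch self-contained.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}
	if _, _, err := signServerCert(caCert, caKey); err != nil {
		log.Fatal(err)
	}
	log.Println("server certificate signed")
}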
	I0812 10:39:19.600625   22139 provision.go:177] copyRemoteCerts
	I0812 10:39:19.600685   22139 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 10:39:19.600708   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHHostname
	I0812 10:39:19.603841   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:19.604190   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:19.604216   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:19.604411   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHPort
	I0812 10:39:19.604773   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:39:19.605047   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHUsername
	I0812 10:39:19.605222   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/id_rsa Username:docker}
	I0812 10:39:19.691643   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0812 10:39:19.691720   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0812 10:39:19.715320   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0812 10:39:19.715401   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0812 10:39:19.740178   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0812 10:39:19.740252   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0812 10:39:19.764374   22139 provision.go:87] duration metric: took 427.021932ms to configureAuth
	I0812 10:39:19.764400   22139 buildroot.go:189] setting minikube options for container-runtime
	I0812 10:39:19.764648   22139 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:39:19.764731   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHHostname
	I0812 10:39:19.767376   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:19.767877   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:19.767909   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:19.768130   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHPort
	I0812 10:39:19.768369   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:39:19.768531   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:39:19.768746   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHUsername
	I0812 10:39:19.768961   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:39:19.769167   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0812 10:39:19.769188   22139 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 10:39:20.033806   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 10:39:20.033838   22139 main.go:141] libmachine: Checking connection to Docker...
	I0812 10:39:20.033847   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetURL
	I0812 10:39:20.035217   22139 main.go:141] libmachine: (ha-919901-m03) DBG | Using libvirt version 6000000
	I0812 10:39:20.037589   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:20.037945   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:20.037973   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:20.038159   22139 main.go:141] libmachine: Docker is up and running!
	I0812 10:39:20.038177   22139 main.go:141] libmachine: Reticulating splines...
	I0812 10:39:20.038184   22139 client.go:171] duration metric: took 26.277750614s to LocalClient.Create
	I0812 10:39:20.038211   22139 start.go:167] duration metric: took 26.277813055s to libmachine.API.Create "ha-919901"
	I0812 10:39:20.038220   22139 start.go:293] postStartSetup for "ha-919901-m03" (driver="kvm2")
	I0812 10:39:20.038230   22139 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 10:39:20.038245   22139 main.go:141] libmachine: (ha-919901-m03) Calling .DriverName
	I0812 10:39:20.038480   22139 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 10:39:20.038506   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHHostname
	I0812 10:39:20.040937   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:20.041236   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:20.041265   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:20.041434   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHPort
	I0812 10:39:20.041633   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:39:20.041805   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHUsername
	I0812 10:39:20.041959   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/id_rsa Username:docker}
	I0812 10:39:20.131924   22139 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 10:39:20.136138   22139 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 10:39:20.136162   22139 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/addons for local assets ...
	I0812 10:39:20.136226   22139 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/files for local assets ...
	I0812 10:39:20.136293   22139 filesync.go:149] local asset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> 109272.pem in /etc/ssl/certs
	I0812 10:39:20.136306   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> /etc/ssl/certs/109272.pem
	I0812 10:39:20.136393   22139 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 10:39:20.146030   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /etc/ssl/certs/109272.pem (1708 bytes)
	I0812 10:39:20.169471   22139 start.go:296] duration metric: took 131.237417ms for postStartSetup
	I0812 10:39:20.169531   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetConfigRaw
	I0812 10:39:20.170199   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetIP
	I0812 10:39:20.172820   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:20.173236   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:20.173263   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:20.173599   22139 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/config.json ...
	I0812 10:39:20.173821   22139 start.go:128] duration metric: took 26.432236244s to createHost
	I0812 10:39:20.173854   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHHostname
	I0812 10:39:20.175960   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:20.176365   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:20.176408   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:20.176544   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHPort
	I0812 10:39:20.176715   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:39:20.176874   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:39:20.177027   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHUsername
	I0812 10:39:20.177178   22139 main.go:141] libmachine: Using SSH client type: native
	I0812 10:39:20.177332   22139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0812 10:39:20.177342   22139 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0812 10:39:20.293681   22139 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723459160.273858948
	
	I0812 10:39:20.293710   22139 fix.go:216] guest clock: 1723459160.273858948
	I0812 10:39:20.293720   22139 fix.go:229] Guest: 2024-08-12 10:39:20.273858948 +0000 UTC Remote: 2024-08-12 10:39:20.173842555 +0000 UTC m=+163.958020195 (delta=100.016393ms)
	I0812 10:39:20.293742   22139 fix.go:200] guest clock delta is within tolerance: 100.016393ms
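
	The two fix.go lines above compare the guest clock (read over SSH with date +%s.%N) against the local clock and accept the machine when the difference stays inside a tolerance. A minimal Go sketch of that comparison, assuming an illustrative 2-second tolerance rather than minikube's actual constant:

	// Minimal sketch of the guest-clock tolerance check logged by fix.go above.
	// The 2s tolerance is an assumed illustrative value, not minikube's constant.
	package main

	import (
	    "fmt"
	    "time"
	)

	func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	    delta := guest.Sub(host)
	    if delta < 0 {
	        delta = -delta
	    }
	    return delta, delta <= tolerance
	}

	func main() {
	    host := time.Now()
	    guest := host.Add(100 * time.Millisecond) // roughly the 100ms delta seen in the log
	    delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
	    fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
	}
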
	I0812 10:39:20.293750   22139 start.go:83] releasing machines lock for "ha-919901-m03", held for 26.552323997s
	I0812 10:39:20.293775   22139 main.go:141] libmachine: (ha-919901-m03) Calling .DriverName
	I0812 10:39:20.294056   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetIP
	I0812 10:39:20.296860   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:20.297227   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:20.297264   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:20.299508   22139 out.go:177] * Found network options:
	I0812 10:39:20.300819   22139 out.go:177]   - NO_PROXY=192.168.39.5,192.168.39.139
	W0812 10:39:20.302196   22139 proxy.go:119] fail to check proxy env: Error ip not in block
	W0812 10:39:20.302219   22139 proxy.go:119] fail to check proxy env: Error ip not in block
	I0812 10:39:20.302233   22139 main.go:141] libmachine: (ha-919901-m03) Calling .DriverName
	I0812 10:39:20.302856   22139 main.go:141] libmachine: (ha-919901-m03) Calling .DriverName
	I0812 10:39:20.303071   22139 main.go:141] libmachine: (ha-919901-m03) Calling .DriverName
	I0812 10:39:20.303173   22139 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 10:39:20.303212   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHHostname
	W0812 10:39:20.303256   22139 proxy.go:119] fail to check proxy env: Error ip not in block
	W0812 10:39:20.303280   22139 proxy.go:119] fail to check proxy env: Error ip not in block
	I0812 10:39:20.303402   22139 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 10:39:20.303425   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHHostname
	I0812 10:39:20.306293   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:20.306503   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:20.306714   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:20.306742   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:20.306859   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHPort
	I0812 10:39:20.306968   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:20.306991   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:20.307040   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:39:20.307177   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHPort
	I0812 10:39:20.307196   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHUsername
	I0812 10:39:20.307329   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:39:20.307385   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/id_rsa Username:docker}
	I0812 10:39:20.307446   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHUsername
	I0812 10:39:20.307581   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/id_rsa Username:docker}
	I0812 10:39:20.548206   22139 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 10:39:20.555158   22139 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 10:39:20.555236   22139 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 10:39:20.571703   22139 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0812 10:39:20.571733   22139 start.go:495] detecting cgroup driver to use...
	I0812 10:39:20.571791   22139 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 10:39:20.589054   22139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 10:39:20.603071   22139 docker.go:217] disabling cri-docker service (if available) ...
	I0812 10:39:20.603140   22139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 10:39:20.616927   22139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 10:39:20.630567   22139 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 10:39:20.751978   22139 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 10:39:20.915733   22139 docker.go:233] disabling docker service ...
	I0812 10:39:20.915796   22139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 10:39:20.932763   22139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 10:39:20.946267   22139 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 10:39:21.059648   22139 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 10:39:21.173353   22139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 10:39:21.188021   22139 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 10:39:21.206027   22139 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0812 10:39:21.206094   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:39:21.216780   22139 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 10:39:21.216837   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:39:21.226789   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:39:21.236799   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:39:21.247259   22139 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 10:39:21.257537   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:39:21.269428   22139 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:39:21.285727   22139 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:39:21.295562   22139 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 10:39:21.304501   22139 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0812 10:39:21.304551   22139 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0812 10:39:21.317231   22139 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
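
	The error above is the expected case when the br_netfilter module is not yet loaded, so the bridge sysctl path does not exist; the next two commands load the module and turn on IPv4 forwarding. A minimal Go sketch of that fallback using os/exec (the same commands as in the log, although minikube actually runs them over SSH via ssh_runner):

	// Sketch of the netfilter fallback: if the bridge sysctl is missing, load
	// br_netfilter and enable IPv4 forwarding. Requires root/sudo to succeed.
	package main

	import (
	    "log"
	    "os/exec"
	)

	func main() {
	    // The sysctl read fails with "No such file or directory" until br_netfilter is loaded.
	    if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
	        if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
	            log.Fatalf("modprobe br_netfilter: %v", err)
	        }
	    }
	    if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
	        log.Fatalf("enable ip_forward: %v", err)
	    }
	}
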
	I0812 10:39:21.326612   22139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 10:39:21.454574   22139 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0812 10:39:21.610379   22139 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 10:39:21.610472   22139 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 10:39:21.615359   22139 start.go:563] Will wait 60s for crictl version
	I0812 10:39:21.615424   22139 ssh_runner.go:195] Run: which crictl
	I0812 10:39:21.619180   22139 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 10:39:21.661781   22139 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0812 10:39:21.661873   22139 ssh_runner.go:195] Run: crio --version
	I0812 10:39:21.692811   22139 ssh_runner.go:195] Run: crio --version
	I0812 10:39:21.724072   22139 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0812 10:39:21.725652   22139 out.go:177]   - env NO_PROXY=192.168.39.5
	I0812 10:39:21.727085   22139 out.go:177]   - env NO_PROXY=192.168.39.5,192.168.39.139
	I0812 10:39:21.728231   22139 main.go:141] libmachine: (ha-919901-m03) Calling .GetIP
	I0812 10:39:21.731239   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:21.731608   22139 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:39:21.731632   22139 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:39:21.731882   22139 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0812 10:39:21.736056   22139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 10:39:21.749288   22139 mustload.go:65] Loading cluster: ha-919901
	I0812 10:39:21.749598   22139 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:39:21.749928   22139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:39:21.749967   22139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:39:21.765319   22139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37113
	I0812 10:39:21.765738   22139 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:39:21.766171   22139 main.go:141] libmachine: Using API Version  1
	I0812 10:39:21.766192   22139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:39:21.766505   22139 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:39:21.766724   22139 main.go:141] libmachine: (ha-919901) Calling .GetState
	I0812 10:39:21.768368   22139 host.go:66] Checking if "ha-919901" exists ...
	I0812 10:39:21.768657   22139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:39:21.768689   22139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:39:21.783620   22139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42393
	I0812 10:39:21.784033   22139 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:39:21.784486   22139 main.go:141] libmachine: Using API Version  1
	I0812 10:39:21.784520   22139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:39:21.784825   22139 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:39:21.785024   22139 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:39:21.785254   22139 certs.go:68] Setting up /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901 for IP: 192.168.39.195
	I0812 10:39:21.785268   22139 certs.go:194] generating shared ca certs ...
	I0812 10:39:21.785282   22139 certs.go:226] acquiring lock for ca certs: {Name:mkab0b83a49d5695bc804bf960b468b7a47f73f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:39:21.785451   22139 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key
	I0812 10:39:21.785491   22139 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key
	I0812 10:39:21.785502   22139 certs.go:256] generating profile certs ...
	I0812 10:39:21.785585   22139 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/client.key
	I0812 10:39:21.785612   22139 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key.bc71961e
	I0812 10:39:21.785634   22139 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt.bc71961e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.5 192.168.39.139 192.168.39.195 192.168.39.254]
	I0812 10:39:21.949137   22139 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt.bc71961e ...
	I0812 10:39:21.949173   22139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt.bc71961e: {Name:mk5171e305f991d45c655793a063dad5dfd92062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:39:21.949359   22139 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key.bc71961e ...
	I0812 10:39:21.949377   22139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key.bc71961e: {Name:mk6d344a5c88c0ce65418b3d5eadf67a5c800f96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:39:21.949481   22139 certs.go:381] copying /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt.bc71961e -> /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt
	I0812 10:39:21.949636   22139 certs.go:385] copying /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key.bc71961e -> /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key
	I0812 10:39:21.949790   22139 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.key
	I0812 10:39:21.949808   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0812 10:39:21.949827   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0812 10:39:21.949847   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0812 10:39:21.949866   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0812 10:39:21.949885   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0812 10:39:21.949903   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0812 10:39:21.949921   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0812 10:39:21.949938   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0812 10:39:21.949997   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem (1338 bytes)
	W0812 10:39:21.950036   22139 certs.go:480] ignoring /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927_empty.pem, impossibly tiny 0 bytes
	I0812 10:39:21.950050   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem (1679 bytes)
	I0812 10:39:21.950083   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem (1082 bytes)
	I0812 10:39:21.950115   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem (1123 bytes)
	I0812 10:39:21.950146   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem (1679 bytes)
	I0812 10:39:21.950198   22139 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem (1708 bytes)
	I0812 10:39:21.950234   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:39:21.950254   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem -> /usr/share/ca-certificates/10927.pem
	I0812 10:39:21.950272   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> /usr/share/ca-certificates/109272.pem
	I0812 10:39:21.950312   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:39:21.953769   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:39:21.954394   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:39:21.954416   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:39:21.954692   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:39:21.954903   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:39:21.955062   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:39:21.955240   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:39:22.029272   22139 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0812 10:39:22.035774   22139 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0812 10:39:22.047549   22139 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0812 10:39:22.051516   22139 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0812 10:39:22.062049   22139 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0812 10:39:22.066010   22139 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0812 10:39:22.076435   22139 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0812 10:39:22.080674   22139 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0812 10:39:22.093101   22139 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0812 10:39:22.097110   22139 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0812 10:39:22.107954   22139 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0812 10:39:22.111581   22139 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0812 10:39:22.122165   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 10:39:22.145850   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 10:39:22.167788   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 10:39:22.191295   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 10:39:22.217242   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0812 10:39:22.240482   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0812 10:39:22.264083   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 10:39:22.287415   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0812 10:39:22.311289   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 10:39:22.334555   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem --> /usr/share/ca-certificates/10927.pem (1338 bytes)
	I0812 10:39:22.356979   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /usr/share/ca-certificates/109272.pem (1708 bytes)
	I0812 10:39:22.379881   22139 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0812 10:39:22.396722   22139 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0812 10:39:22.414597   22139 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0812 10:39:22.431326   22139 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0812 10:39:22.449267   22139 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0812 10:39:22.465456   22139 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0812 10:39:22.481885   22139 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0812 10:39:22.497980   22139 ssh_runner.go:195] Run: openssl version
	I0812 10:39:22.503469   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109272.pem && ln -fs /usr/share/ca-certificates/109272.pem /etc/ssl/certs/109272.pem"
	I0812 10:39:22.514150   22139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109272.pem
	I0812 10:39:22.518570   22139 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 10:33 /usr/share/ca-certificates/109272.pem
	I0812 10:39:22.518619   22139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109272.pem
	I0812 10:39:22.524075   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109272.pem /etc/ssl/certs/3ec20f2e.0"
	I0812 10:39:22.534675   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 10:39:22.545520   22139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:39:22.549823   22139 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:39:22.549879   22139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:39:22.555414   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 10:39:22.566319   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10927.pem && ln -fs /usr/share/ca-certificates/10927.pem /etc/ssl/certs/10927.pem"
	I0812 10:39:22.576970   22139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10927.pem
	I0812 10:39:22.581430   22139 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 10:33 /usr/share/ca-certificates/10927.pem
	I0812 10:39:22.581501   22139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10927.pem
	I0812 10:39:22.587536   22139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10927.pem /etc/ssl/certs/51391683.0"
	I0812 10:39:22.598543   22139 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 10:39:22.602642   22139 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0812 10:39:22.602709   22139 kubeadm.go:934] updating node {m03 192.168.39.195 8443 v1.30.3 crio true true} ...
	I0812 10:39:22.602788   22139 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-919901-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-919901 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0812 10:39:22.602814   22139 kube-vip.go:115] generating kube-vip config ...
	I0812 10:39:22.602851   22139 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0812 10:39:22.619658   22139 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0812 10:39:22.619739   22139 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0812 10:39:22.619808   22139 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0812 10:39:22.629510   22139 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0812 10:39:22.629588   22139 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0812 10:39:22.638674   22139 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0812 10:39:22.638706   22139 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0812 10:39:22.638723   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0812 10:39:22.638728   22139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:39:22.638674   22139 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0812 10:39:22.638784   22139 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0812 10:39:22.638787   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0812 10:39:22.638864   22139 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0812 10:39:22.656137   22139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0812 10:39:22.656203   22139 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0812 10:39:22.656245   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0812 10:39:22.656266   22139 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0812 10:39:22.656297   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0812 10:39:22.656247   22139 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0812 10:39:22.681531   22139 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0812 10:39:22.681580   22139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
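
	The stat/scp pairs above follow a simple check-then-copy pattern: if a v1.30.3 binary is missing on the node, it is transferred from the local cache. A minimal local sketch of that pattern; the paths are illustrative stand-ins and the real transfer goes over SSH rather than through the local filesystem:

	// Sketch of the "stat, then copy from cache" transfer used for kubeadm,
	// kubectl and kubelet above. Paths are hypothetical; minikube copies over SSH.
	package main

	import (
	    "fmt"
	    "io"
	    "os"
	    "path/filepath"
	)

	func ensureBinary(cacheDir, destDir, name string) error {
	    dst := filepath.Join(destDir, name)
	    if _, err := os.Stat(dst); err == nil {
	        return nil // already present, skip the transfer
	    }
	    if err := os.MkdirAll(destDir, 0o755); err != nil {
	        return err
	    }
	    src, err := os.Open(filepath.Join(cacheDir, name))
	    if err != nil {
	        return err
	    }
	    defer src.Close()
	    out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	    if err != nil {
	        return err
	    }
	    defer out.Close()
	    _, err = io.Copy(out, src)
	    return err
	}

	func main() {
	    for _, b := range []string{"kubeadm", "kubectl", "kubelet"} {
	        if err := ensureBinary("/tmp/cache/v1.30.3", "/tmp/binaries/v1.30.3", b); err != nil {
	            fmt.Println("transfer failed:", b, err)
	        }
	    }
	}
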
	I0812 10:39:23.554251   22139 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0812 10:39:23.564465   22139 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0812 10:39:23.583010   22139 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 10:39:23.600680   22139 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0812 10:39:23.618445   22139 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0812 10:39:23.622366   22139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 10:39:23.634628   22139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 10:39:23.753923   22139 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 10:39:23.770529   22139 host.go:66] Checking if "ha-919901" exists ...
	I0812 10:39:23.770918   22139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:39:23.770966   22139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:39:23.789842   22139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45763
	I0812 10:39:23.790324   22139 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:39:23.790831   22139 main.go:141] libmachine: Using API Version  1
	I0812 10:39:23.790854   22139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:39:23.791214   22139 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:39:23.791426   22139 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:39:23.791569   22139 start.go:317] joinCluster: &{Name:ha-919901 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-919901 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 10:39:23.791689   22139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0812 10:39:23.791707   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:39:23.794805   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:39:23.795259   22139 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:39:23.795296   22139 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:39:23.795403   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:39:23.795640   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:39:23.795826   22139 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:39:23.795980   22139 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:39:23.966445   22139 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 10:39:23.966482   22139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token f9003j.6i2ogw8a6w17yk3t --discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-919901-m03 --control-plane --apiserver-advertise-address=192.168.39.195 --apiserver-bind-port=8443"
	I0812 10:39:47.821276   22139 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token f9003j.6i2ogw8a6w17yk3t --discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-919901-m03 --control-plane --apiserver-advertise-address=192.168.39.195 --apiserver-bind-port=8443": (23.85475962s)
	I0812 10:39:47.821324   22139 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0812 10:39:48.432646   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-919901-m03 minikube.k8s.io/updated_at=2024_08_12T10_39_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7 minikube.k8s.io/name=ha-919901 minikube.k8s.io/primary=false
	I0812 10:39:48.559096   22139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-919901-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0812 10:39:48.681854   22139 start.go:319] duration metric: took 24.890280586s to joinCluster
	I0812 10:39:48.681992   22139 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 10:39:48.682338   22139 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:39:48.683772   22139 out.go:177] * Verifying Kubernetes components...
	I0812 10:39:48.685350   22139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 10:39:48.974620   22139 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 10:39:49.044155   22139 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 10:39:49.044439   22139 kapi.go:59] client config for ha-919901: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/client.crt", KeyFile:"/home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/client.key", CAFile:"/home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d03100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0812 10:39:49.044496   22139 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.5:8443
	I0812 10:39:49.044728   22139 node_ready.go:35] waiting up to 6m0s for node "ha-919901-m03" to be "Ready" ...
	I0812 10:39:49.044811   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:49.044822   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:49.044832   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:49.044838   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:49.048172   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
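
	From here node_ready.go keeps issuing the GET above roughly every 500ms until the node's Ready condition reports True or the 6m0s budget expires. A minimal Go sketch of such a poll against the endpoint shown in the log; client-certificate authentication is omitted and TLS verification is skipped purely for illustration, so this is not how the real client authenticates:

	// Sketch of the readiness poll: GET the node object every 500ms and look
	// for the NodeReady condition with status "True". Endpoint and node name
	// are taken from the log; auth is deliberately left out of this sketch.
	package main

	import (
	    "crypto/tls"
	    "encoding/json"
	    "fmt"
	    "net/http"
	    "time"
	)

	type node struct {
	    Status struct {
	        Conditions []struct {
	            Type   string `json:"type"`
	            Status string `json:"status"`
	        } `json:"conditions"`
	    } `json:"status"`
	}

	func nodeReady(c *http.Client, url string) (bool, error) {
	    resp, err := c.Get(url)
	    if err != nil {
	        return false, err
	    }
	    defer resp.Body.Close()
	    var n node
	    if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
	        return false, err
	    }
	    for _, cond := range n.Status.Conditions {
	        if cond.Type == "Ready" && cond.Status == "True" {
	            return true, nil
	        }
	    }
	    return false, nil
	}

	func main() {
	    c := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
	    url := "https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03"
	    deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait in the log
	    for time.Now().Before(deadline) {
	        if ok, err := nodeReady(c, url); err == nil && ok {
	            fmt.Println("node is Ready")
	            return
	        }
	        time.Sleep(500 * time.Millisecond)
	    }
	    fmt.Println("timed out waiting for Ready")
	}
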
	I0812 10:39:49.545024   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:49.545045   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:49.545054   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:49.545061   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:49.553804   22139 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0812 10:39:50.045979   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:50.046020   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:50.046033   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:50.046044   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:50.050363   22139 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 10:39:50.545032   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:50.545051   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:50.545060   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:50.545064   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:50.554965   22139 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0812 10:39:51.045860   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:51.045881   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:51.045890   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:51.045896   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:51.049642   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:51.050320   22139 node_ready.go:53] node "ha-919901-m03" has status "Ready":"False"
	I0812 10:39:51.545456   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:51.545482   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:51.545493   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:51.545499   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:51.549297   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:52.044953   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:52.044981   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:52.045006   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:52.045014   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:52.048263   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:52.545777   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:52.545795   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:52.545803   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:52.545808   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:52.549410   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:53.045058   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:53.045081   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:53.045089   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:53.045092   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:53.048507   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:53.545314   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:53.545353   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:53.545362   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:53.545367   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:53.549047   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:53.549963   22139 node_ready.go:53] node "ha-919901-m03" has status "Ready":"False"
	I0812 10:39:54.045209   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:54.045233   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:54.045243   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:54.045248   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:54.048625   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:54.545642   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:54.545677   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:54.545689   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:54.545696   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:54.549691   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:55.045500   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:55.045521   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:55.045529   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:55.045533   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:55.049104   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:55.545128   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:55.545158   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:55.545167   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:55.545174   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:55.631274   22139 round_trippers.go:574] Response Status: 200 OK in 86 milliseconds
	I0812 10:39:55.632236   22139 node_ready.go:53] node "ha-919901-m03" has status "Ready":"False"
	I0812 10:39:56.045539   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:56.045566   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:56.045578   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:56.045585   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:56.048857   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:56.545777   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:56.545802   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:56.545814   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:56.545820   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:56.549521   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:57.045521   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:57.045544   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:57.045552   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:57.045556   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:57.049336   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:57.545823   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:57.545848   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:57.545860   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:57.545866   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:57.549847   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:58.045617   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:58.045641   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:58.045649   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:58.045654   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:58.049059   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:58.049903   22139 node_ready.go:53] node "ha-919901-m03" has status "Ready":"False"
	I0812 10:39:58.545128   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:58.545150   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:58.545161   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:58.545167   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:58.548940   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:59.045945   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:59.045976   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:59.045984   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:59.045991   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:59.049272   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:39:59.545049   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:39:59.545074   22139 round_trippers.go:469] Request Headers:
	I0812 10:39:59.545081   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:39:59.545085   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:39:59.548633   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:00.045573   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:00.045597   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:00.045608   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:00.045614   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:00.048944   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:00.544947   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:00.544972   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:00.544988   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:00.544995   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:00.548418   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:00.549075   22139 node_ready.go:53] node "ha-919901-m03" has status "Ready":"False"
	I0812 10:40:01.045466   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:01.045509   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:01.045520   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:01.045527   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:01.049225   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:01.545827   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:01.545850   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:01.545861   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:01.545866   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:01.549774   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:02.045839   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:02.045862   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:02.045870   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:02.045873   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:02.049216   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:02.545047   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:02.545081   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:02.545089   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:02.545093   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:02.548701   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:02.549430   22139 node_ready.go:53] node "ha-919901-m03" has status "Ready":"False"
	I0812 10:40:03.045819   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:03.045842   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:03.045848   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:03.045853   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:03.049420   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:03.545321   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:03.545343   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:03.545353   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:03.545358   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:03.548983   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:04.045753   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:04.045775   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:04.045783   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:04.045786   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:04.048983   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:04.545909   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:04.545938   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:04.545948   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:04.545952   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:04.549225   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:04.549750   22139 node_ready.go:53] node "ha-919901-m03" has status "Ready":"False"
	I0812 10:40:05.045124   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:05.045146   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:05.045153   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:05.045157   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:05.048385   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:05.545052   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:05.545077   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:05.545088   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:05.545095   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:05.549732   22139 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 10:40:06.045845   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:06.045867   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:06.045878   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:06.045883   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:06.049809   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:06.545235   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:06.545279   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:06.545289   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:06.545293   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:06.548818   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:07.045650   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:07.045684   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:07.045694   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:07.045704   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:07.049409   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:07.050589   22139 node_ready.go:53] node "ha-919901-m03" has status "Ready":"False"
	I0812 10:40:07.545015   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:07.545051   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:07.545059   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:07.545063   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:07.548434   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:07.549092   22139 node_ready.go:49] node "ha-919901-m03" has status "Ready":"True"
	I0812 10:40:07.549116   22139 node_ready.go:38] duration metric: took 18.504372406s for node "ha-919901-m03" to be "Ready" ...
	I0812 10:40:07.549129   22139 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 10:40:07.549191   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0812 10:40:07.549200   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:07.549207   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:07.549211   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:07.556054   22139 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0812 10:40:07.562760   22139 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rc7cl" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:07.562865   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rc7cl
	I0812 10:40:07.562874   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:07.562882   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:07.562886   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:07.566516   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:07.567337   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:40:07.567352   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:07.567359   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:07.567364   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:07.570320   22139 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 10:40:07.570849   22139 pod_ready.go:92] pod "coredns-7db6d8ff4d-rc7cl" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:07.570868   22139 pod_ready.go:81] duration metric: took 8.078681ms for pod "coredns-7db6d8ff4d-rc7cl" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:07.570880   22139 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wstd4" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:07.570940   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-wstd4
	I0812 10:40:07.570950   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:07.570959   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:07.570967   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:07.573966   22139 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 10:40:07.574787   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:40:07.574803   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:07.574810   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:07.574814   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:07.577707   22139 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 10:40:07.578355   22139 pod_ready.go:92] pod "coredns-7db6d8ff4d-wstd4" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:07.578375   22139 pod_ready.go:81] duration metric: took 7.487916ms for pod "coredns-7db6d8ff4d-wstd4" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:07.578386   22139 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-919901" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:07.578458   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-919901
	I0812 10:40:07.578469   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:07.578476   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:07.578480   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:07.581268   22139 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 10:40:07.581792   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:40:07.581806   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:07.581812   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:07.581816   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:07.584654   22139 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 10:40:07.585253   22139 pod_ready.go:92] pod "etcd-ha-919901" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:07.585273   22139 pod_ready.go:81] duration metric: took 6.878189ms for pod "etcd-ha-919901" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:07.585287   22139 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-919901-m02" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:07.585354   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-919901-m02
	I0812 10:40:07.585363   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:07.585373   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:07.585381   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:07.588128   22139 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 10:40:07.588717   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:40:07.588731   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:07.588738   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:07.588741   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:07.591951   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:07.592782   22139 pod_ready.go:92] pod "etcd-ha-919901-m02" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:07.592805   22139 pod_ready.go:81] duration metric: took 7.50856ms for pod "etcd-ha-919901-m02" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:07.592818   22139 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-919901-m03" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:07.745151   22139 request.go:629] Waited for 152.258306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-919901-m03
	I0812 10:40:07.745239   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-919901-m03
	I0812 10:40:07.745250   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:07.745257   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:07.745266   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:07.748628   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:07.945521   22139 request.go:629] Waited for 196.390149ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:07.945612   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:07.945635   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:07.945647   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:07.945662   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:07.949009   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:07.949668   22139 pod_ready.go:92] pod "etcd-ha-919901-m03" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:07.949688   22139 pod_ready.go:81] duration metric: took 356.862793ms for pod "etcd-ha-919901-m03" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:07.949709   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-919901" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:08.145413   22139 request.go:629] Waited for 195.623441ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-919901
	I0812 10:40:08.145470   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-919901
	I0812 10:40:08.145475   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:08.145482   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:08.145487   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:08.148840   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:08.346104   22139 request.go:629] Waited for 196.419769ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:40:08.346157   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:40:08.346162   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:08.346169   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:08.346172   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:08.349269   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:08.349915   22139 pod_ready.go:92] pod "kube-apiserver-ha-919901" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:08.349934   22139 pod_ready.go:81] duration metric: took 400.217619ms for pod "kube-apiserver-ha-919901" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:08.349962   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-919901-m02" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:08.545517   22139 request.go:629] Waited for 195.481494ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-919901-m02
	I0812 10:40:08.545601   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-919901-m02
	I0812 10:40:08.545607   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:08.545615   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:08.545622   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:08.549619   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:08.745193   22139 request.go:629] Waited for 194.311263ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:40:08.745273   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:40:08.745281   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:08.745315   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:08.745321   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:08.748900   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:08.749608   22139 pod_ready.go:92] pod "kube-apiserver-ha-919901-m02" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:08.749629   22139 pod_ready.go:81] duration metric: took 399.659166ms for pod "kube-apiserver-ha-919901-m02" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:08.749639   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-919901-m03" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:08.945644   22139 request.go:629] Waited for 195.924629ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-919901-m03
	I0812 10:40:08.945702   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-919901-m03
	I0812 10:40:08.945708   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:08.945717   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:08.945722   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:08.949521   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:09.145627   22139 request.go:629] Waited for 195.367609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:09.145703   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:09.145710   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:09.145721   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:09.145727   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:09.149187   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:09.149675   22139 pod_ready.go:92] pod "kube-apiserver-ha-919901-m03" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:09.149692   22139 pod_ready.go:81] duration metric: took 400.047769ms for pod "kube-apiserver-ha-919901-m03" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:09.149701   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-919901" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:09.345854   22139 request.go:629] Waited for 196.064636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-919901
	I0812 10:40:09.345913   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-919901
	I0812 10:40:09.345918   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:09.345925   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:09.345930   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:09.349312   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:09.545325   22139 request.go:629] Waited for 195.308979ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:40:09.545400   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:40:09.545407   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:09.545418   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:09.545423   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:09.548980   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:09.549779   22139 pod_ready.go:92] pod "kube-controller-manager-ha-919901" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:09.549798   22139 pod_ready.go:81] duration metric: took 400.090053ms for pod "kube-controller-manager-ha-919901" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:09.549808   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-919901-m02" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:09.746018   22139 request.go:629] Waited for 196.147849ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-919901-m02
	I0812 10:40:09.746105   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-919901-m02
	I0812 10:40:09.746115   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:09.746125   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:09.746137   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:09.749873   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:09.946023   22139 request.go:629] Waited for 195.321492ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:40:09.946092   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:40:09.946099   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:09.946109   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:09.946115   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:09.949468   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:09.950018   22139 pod_ready.go:92] pod "kube-controller-manager-ha-919901-m02" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:09.950040   22139 pod_ready.go:81] duration metric: took 400.223629ms for pod "kube-controller-manager-ha-919901-m02" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:09.950051   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-919901-m03" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:10.146046   22139 request.go:629] Waited for 195.931355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-919901-m03
	I0812 10:40:10.146109   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-919901-m03
	I0812 10:40:10.146114   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:10.146122   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:10.146127   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:10.149521   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:10.345712   22139 request.go:629] Waited for 195.387623ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:10.345789   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:10.345795   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:10.345803   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:10.345811   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:10.349722   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:10.350685   22139 pod_ready.go:92] pod "kube-controller-manager-ha-919901-m03" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:10.350710   22139 pod_ready.go:81] duration metric: took 400.651599ms for pod "kube-controller-manager-ha-919901-m03" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:10.350725   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6xqjr" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:10.545742   22139 request.go:629] Waited for 194.940464ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6xqjr
	I0812 10:40:10.545805   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6xqjr
	I0812 10:40:10.545811   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:10.545818   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:10.545822   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:10.549599   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:10.745644   22139 request.go:629] Waited for 195.345272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:10.745715   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:10.745720   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:10.745727   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:10.745730   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:10.749381   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:10.749899   22139 pod_ready.go:92] pod "kube-proxy-6xqjr" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:10.749916   22139 pod_ready.go:81] duration metric: took 399.184059ms for pod "kube-proxy-6xqjr" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:10.749926   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cczfj" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:10.946044   22139 request.go:629] Waited for 196.056707ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cczfj
	I0812 10:40:10.946111   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cczfj
	I0812 10:40:10.946117   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:10.946129   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:10.946137   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:10.949676   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:11.145879   22139 request.go:629] Waited for 195.384898ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:40:11.145967   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:40:11.145978   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:11.145985   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:11.145988   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:11.149064   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:11.149663   22139 pod_ready.go:92] pod "kube-proxy-cczfj" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:11.149680   22139 pod_ready.go:81] duration metric: took 399.748449ms for pod "kube-proxy-cczfj" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:11.149689   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ftvfl" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:11.345050   22139 request.go:629] Waited for 195.276304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ftvfl
	I0812 10:40:11.345120   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ftvfl
	I0812 10:40:11.345126   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:11.345134   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:11.345141   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:11.348419   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:11.545437   22139 request.go:629] Waited for 196.290149ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:40:11.545494   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:40:11.545498   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:11.545506   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:11.545510   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:11.548860   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:11.549308   22139 pod_ready.go:92] pod "kube-proxy-ftvfl" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:11.549326   22139 pod_ready.go:81] duration metric: took 399.631439ms for pod "kube-proxy-ftvfl" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:11.549335   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-919901" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:11.745434   22139 request.go:629] Waited for 196.031432ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-919901
	I0812 10:40:11.745507   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-919901
	I0812 10:40:11.745512   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:11.745519   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:11.745533   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:11.749044   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:11.945915   22139 request.go:629] Waited for 196.056401ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:40:11.946015   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901
	I0812 10:40:11.946028   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:11.946039   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:11.946047   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:11.949046   22139 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 10:40:11.949770   22139 pod_ready.go:92] pod "kube-scheduler-ha-919901" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:11.949786   22139 pod_ready.go:81] duration metric: took 400.445415ms for pod "kube-scheduler-ha-919901" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:11.949795   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-919901-m02" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:12.145772   22139 request.go:629] Waited for 195.913279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-919901-m02
	I0812 10:40:12.145883   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-919901-m02
	I0812 10:40:12.145893   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:12.145902   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:12.145913   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:12.149669   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:12.345718   22139 request.go:629] Waited for 195.386055ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:40:12.345836   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m02
	I0812 10:40:12.345858   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:12.345870   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:12.345879   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:12.349428   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:12.349955   22139 pod_ready.go:92] pod "kube-scheduler-ha-919901-m02" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:12.349973   22139 pod_ready.go:81] duration metric: took 400.172097ms for pod "kube-scheduler-ha-919901-m02" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:12.349983   22139 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-919901-m03" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:12.545083   22139 request.go:629] Waited for 195.036653ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-919901-m03
	I0812 10:40:12.545173   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-919901-m03
	I0812 10:40:12.545185   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:12.545196   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:12.545201   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:12.548690   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:12.745765   22139 request.go:629] Waited for 196.391035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:12.745846   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-919901-m03
	I0812 10:40:12.745857   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:12.745864   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:12.745868   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:12.749373   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:12.750288   22139 pod_ready.go:92] pod "kube-scheduler-ha-919901-m03" in "kube-system" namespace has status "Ready":"True"
	I0812 10:40:12.750312   22139 pod_ready.go:81] duration metric: took 400.323333ms for pod "kube-scheduler-ha-919901-m03" in "kube-system" namespace to be "Ready" ...
	I0812 10:40:12.750323   22139 pod_ready.go:38] duration metric: took 5.201181989s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 10:40:12.750354   22139 api_server.go:52] waiting for apiserver process to appear ...
	I0812 10:40:12.750463   22139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:40:12.767642   22139 api_server.go:72] duration metric: took 24.085611745s to wait for apiserver process to appear ...
	I0812 10:40:12.767674   22139 api_server.go:88] waiting for apiserver healthz status ...
	I0812 10:40:12.767702   22139 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I0812 10:40:12.774553   22139 api_server.go:279] https://192.168.39.5:8443/healthz returned 200:
	ok
	I0812 10:40:12.774683   22139 round_trippers.go:463] GET https://192.168.39.5:8443/version
	I0812 10:40:12.774697   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:12.774706   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:12.774714   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:12.775702   22139 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0812 10:40:12.775772   22139 api_server.go:141] control plane version: v1.30.3
	I0812 10:40:12.775789   22139 api_server.go:131] duration metric: took 8.106849ms to wait for apiserver health ...
	I0812 10:40:12.775802   22139 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 10:40:12.946064   22139 request.go:629] Waited for 170.185941ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0812 10:40:12.946156   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0812 10:40:12.946163   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:12.946173   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:12.946180   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:12.952972   22139 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0812 10:40:12.959365   22139 system_pods.go:59] 24 kube-system pods found
	I0812 10:40:12.959414   22139 system_pods.go:61] "coredns-7db6d8ff4d-rc7cl" [92f21234-d4e8-4f0e-a8e5-356db2297843] Running
	I0812 10:40:12.959422   22139 system_pods.go:61] "coredns-7db6d8ff4d-wstd4" [53bfc998-8d70-4dc5-b0f9-a78610183a2b] Running
	I0812 10:40:12.959427   22139 system_pods.go:61] "etcd-ha-919901" [a2c1d3fe-ff0a-4239-86b1-fa95100bf490] Running
	I0812 10:40:12.959432   22139 system_pods.go:61] "etcd-ha-919901-m02" [37a916a1-fb7f-4256-9ce9-e77d68b91eec] Running
	I0812 10:40:12.959437   22139 system_pods.go:61] "etcd-ha-919901-m03" [499957e0-c2b4-4a3c-9e52-933153a1c27e] Running
	I0812 10:40:12.959443   22139 system_pods.go:61] "kindnet-6v7rs" [43c3bf93-f498-4ea3-b494-a1f06e64e2d2] Running
	I0812 10:40:12.959447   22139 system_pods.go:61] "kindnet-8cqm5" [ac0a56b9-e7f9-439d-a088-54463e9d41bc] Running
	I0812 10:40:12.959453   22139 system_pods.go:61] "kindnet-k5wz9" [75e585a5-9ab7-4211-8ed0-dc1d21345883] Running
	I0812 10:40:12.959458   22139 system_pods.go:61] "kube-apiserver-ha-919901" [193c8d04-dc77-4004-8000-fd396b727895] Running
	I0812 10:40:12.959463   22139 system_pods.go:61] "kube-apiserver-ha-919901-m02" [58d119c5-c69e-4a89-bab6-18a82f0cfe3f] Running
	I0812 10:40:12.959476   22139 system_pods.go:61] "kube-apiserver-ha-919901-m03" [1c13201f-27e2-4987-bfc9-1c25f8e447bd] Running
	I0812 10:40:12.959481   22139 system_pods.go:61] "kube-controller-manager-ha-919901" [242663e4-854c-4b58-9864-cabeb79577f7] Running
	I0812 10:40:12.959490   22139 system_pods.go:61] "kube-controller-manager-ha-919901-m02" [8036adae-dadc-4dbe-af53-de82cc21d9c2] Running
	I0812 10:40:12.959496   22139 system_pods.go:61] "kube-controller-manager-ha-919901-m03" [ef3b4e77-df48-48c0-a4b2-e9a1f1e64f70] Running
	I0812 10:40:12.959505   22139 system_pods.go:61] "kube-proxy-6xqjr" [013061ce-22f2-4c9c-991e-9a911c914ca4] Running
	I0812 10:40:12.959515   22139 system_pods.go:61] "kube-proxy-cczfj" [711059fc-2c4a-4706-97a5-007be66ecaff] Running
	I0812 10:40:12.959520   22139 system_pods.go:61] "kube-proxy-ftvfl" [7ed243a1-62f6-4ad1-8873-0fbe1756be9e] Running
	I0812 10:40:12.959528   22139 system_pods.go:61] "kube-scheduler-ha-919901" [ec67c1cf-8e1c-4973-8f96-c558fccb26be] Running
	I0812 10:40:12.959533   22139 system_pods.go:61] "kube-scheduler-ha-919901-m02" [8cf797a6-cf19-4653-a998-395260a0ee1a] Running
	I0812 10:40:12.959540   22139 system_pods.go:61] "kube-scheduler-ha-919901-m03" [712b2426-78f2-4560-a7a8-7af53da3c627] Running
	I0812 10:40:12.959546   22139 system_pods.go:61] "kube-vip-ha-919901" [46735446-a563-4870-9509-441ad0cd5c45] Running
	I0812 10:40:12.959554   22139 system_pods.go:61] "kube-vip-ha-919901-m02" [9df99381-0503-4bef-ac63-a06f687d1c1a] Running
	I0812 10:40:12.959561   22139 system_pods.go:61] "kube-vip-ha-919901-m03" [2e37e0c0-dbac-43f1-b7c8-141d6db6c191] Running
	I0812 10:40:12.959566   22139 system_pods.go:61] "storage-provisioner" [6d697e68-33fa-4784-90d8-0561d3fff6a8] Running
	I0812 10:40:12.959575   22139 system_pods.go:74] duration metric: took 183.766982ms to wait for pod list to return data ...
	I0812 10:40:12.959588   22139 default_sa.go:34] waiting for default service account to be created ...
	I0812 10:40:13.145977   22139 request.go:629] Waited for 186.296523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/default/serviceaccounts
	I0812 10:40:13.146050   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/default/serviceaccounts
	I0812 10:40:13.146060   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:13.146073   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:13.146083   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:13.149736   22139 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 10:40:13.149861   22139 default_sa.go:45] found service account: "default"
	I0812 10:40:13.149880   22139 default_sa.go:55] duration metric: took 190.283977ms for default service account to be created ...
	I0812 10:40:13.149890   22139 system_pods.go:116] waiting for k8s-apps to be running ...
	I0812 10:40:13.345342   22139 request.go:629] Waited for 195.382281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0812 10:40:13.345400   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0812 10:40:13.345406   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:13.345413   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:13.345418   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:13.352358   22139 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0812 10:40:13.358696   22139 system_pods.go:86] 24 kube-system pods found
	I0812 10:40:13.358727   22139 system_pods.go:89] "coredns-7db6d8ff4d-rc7cl" [92f21234-d4e8-4f0e-a8e5-356db2297843] Running
	I0812 10:40:13.358732   22139 system_pods.go:89] "coredns-7db6d8ff4d-wstd4" [53bfc998-8d70-4dc5-b0f9-a78610183a2b] Running
	I0812 10:40:13.358737   22139 system_pods.go:89] "etcd-ha-919901" [a2c1d3fe-ff0a-4239-86b1-fa95100bf490] Running
	I0812 10:40:13.358740   22139 system_pods.go:89] "etcd-ha-919901-m02" [37a916a1-fb7f-4256-9ce9-e77d68b91eec] Running
	I0812 10:40:13.358745   22139 system_pods.go:89] "etcd-ha-919901-m03" [499957e0-c2b4-4a3c-9e52-933153a1c27e] Running
	I0812 10:40:13.358749   22139 system_pods.go:89] "kindnet-6v7rs" [43c3bf93-f498-4ea3-b494-a1f06e64e2d2] Running
	I0812 10:40:13.358753   22139 system_pods.go:89] "kindnet-8cqm5" [ac0a56b9-e7f9-439d-a088-54463e9d41bc] Running
	I0812 10:40:13.358756   22139 system_pods.go:89] "kindnet-k5wz9" [75e585a5-9ab7-4211-8ed0-dc1d21345883] Running
	I0812 10:40:13.358762   22139 system_pods.go:89] "kube-apiserver-ha-919901" [193c8d04-dc77-4004-8000-fd396b727895] Running
	I0812 10:40:13.358766   22139 system_pods.go:89] "kube-apiserver-ha-919901-m02" [58d119c5-c69e-4a89-bab6-18a82f0cfe3f] Running
	I0812 10:40:13.358770   22139 system_pods.go:89] "kube-apiserver-ha-919901-m03" [1c13201f-27e2-4987-bfc9-1c25f8e447bd] Running
	I0812 10:40:13.358774   22139 system_pods.go:89] "kube-controller-manager-ha-919901" [242663e4-854c-4b58-9864-cabeb79577f7] Running
	I0812 10:40:13.358778   22139 system_pods.go:89] "kube-controller-manager-ha-919901-m02" [8036adae-dadc-4dbe-af53-de82cc21d9c2] Running
	I0812 10:40:13.358784   22139 system_pods.go:89] "kube-controller-manager-ha-919901-m03" [ef3b4e77-df48-48c0-a4b2-e9a1f1e64f70] Running
	I0812 10:40:13.358789   22139 system_pods.go:89] "kube-proxy-6xqjr" [013061ce-22f2-4c9c-991e-9a911c914ca4] Running
	I0812 10:40:13.358793   22139 system_pods.go:89] "kube-proxy-cczfj" [711059fc-2c4a-4706-97a5-007be66ecaff] Running
	I0812 10:40:13.358797   22139 system_pods.go:89] "kube-proxy-ftvfl" [7ed243a1-62f6-4ad1-8873-0fbe1756be9e] Running
	I0812 10:40:13.358801   22139 system_pods.go:89] "kube-scheduler-ha-919901" [ec67c1cf-8e1c-4973-8f96-c558fccb26be] Running
	I0812 10:40:13.358804   22139 system_pods.go:89] "kube-scheduler-ha-919901-m02" [8cf797a6-cf19-4653-a998-395260a0ee1a] Running
	I0812 10:40:13.358808   22139 system_pods.go:89] "kube-scheduler-ha-919901-m03" [712b2426-78f2-4560-a7a8-7af53da3c627] Running
	I0812 10:40:13.358812   22139 system_pods.go:89] "kube-vip-ha-919901" [46735446-a563-4870-9509-441ad0cd5c45] Running
	I0812 10:40:13.358815   22139 system_pods.go:89] "kube-vip-ha-919901-m02" [9df99381-0503-4bef-ac63-a06f687d1c1a] Running
	I0812 10:40:13.358818   22139 system_pods.go:89] "kube-vip-ha-919901-m03" [2e37e0c0-dbac-43f1-b7c8-141d6db6c191] Running
	I0812 10:40:13.358822   22139 system_pods.go:89] "storage-provisioner" [6d697e68-33fa-4784-90d8-0561d3fff6a8] Running
	I0812 10:40:13.358827   22139 system_pods.go:126] duration metric: took 208.929081ms to wait for k8s-apps to be running ...
	I0812 10:40:13.358836   22139 system_svc.go:44] waiting for kubelet service to be running ....
	I0812 10:40:13.358884   22139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:40:13.374275   22139 system_svc.go:56] duration metric: took 15.428513ms WaitForService to wait for kubelet
	I0812 10:40:13.374314   22139 kubeadm.go:582] duration metric: took 24.692286487s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 10:40:13.374354   22139 node_conditions.go:102] verifying NodePressure condition ...
	I0812 10:40:13.545990   22139 request.go:629] Waited for 171.54847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes
	I0812 10:40:13.546055   22139 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes
	I0812 10:40:13.546062   22139 round_trippers.go:469] Request Headers:
	I0812 10:40:13.546073   22139 round_trippers.go:473]     Accept: application/json, */*
	I0812 10:40:13.546081   22139 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 10:40:13.550219   22139 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 10:40:13.551372   22139 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 10:40:13.551412   22139 node_conditions.go:123] node cpu capacity is 2
	I0812 10:40:13.551437   22139 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 10:40:13.551443   22139 node_conditions.go:123] node cpu capacity is 2
	I0812 10:40:13.551449   22139 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 10:40:13.551454   22139 node_conditions.go:123] node cpu capacity is 2
	I0812 10:40:13.551463   22139 node_conditions.go:105] duration metric: took 177.102596ms to run NodePressure ...
	I0812 10:40:13.551483   22139 start.go:241] waiting for startup goroutines ...
	I0812 10:40:13.551512   22139 start.go:255] writing updated cluster config ...
	I0812 10:40:13.551918   22139 ssh_runner.go:195] Run: rm -f paused
	I0812 10:40:13.605291   22139 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0812 10:40:13.607605   22139 out.go:177] * Done! kubectl is now configured to use "ha-919901" cluster and "default" namespace by default
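
Note on the wait loop captured above: minikube polls GET /api/v1/nodes/<name> roughly every 500ms until the node's "Ready" condition turns "True", repeats the same pattern for each system-critical pod, and finally probes /healthz and /version before declaring the cluster up. The sketch below reproduces just the node-readiness part with client-go; it is an illustration under assumptions (a kubeconfig at the default location, a hypothetical helper named waitNodeReady), not minikube's actual implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object every 500ms (roughly the cadence seen in
// the round_trippers log above) until its NodeReady condition is True or the
// timeout expires.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready within %s", name, timeout)
}

func main() {
	// Assumes a kubeconfig at the default path (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "ha-919901-m03", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}

The repeated "Waited for ...ms due to client-side throttling" lines above are emitted by client-go's own client-side rate limiter rather than the API server's priority-and-fairness machinery; if denser polling were needed, the QPS and Burst fields on the rest.Config could be raised before building the clientset.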
	
	
	==> CRI-O <==
	Aug 12 10:44:50 ha-919901 crio[680]: time="2024-08-12 10:44:50.785168752Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723459490785142535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d718189d-0992-4442-9279-49ec2114060d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:44:50 ha-919901 crio[680]: time="2024-08-12 10:44:50.785890082Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=42ebe78f-4ee4-4922-8d5c-933890010c96 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:44:50 ha-919901 crio[680]: time="2024-08-12 10:44:50.785949669Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=42ebe78f-4ee4-4922-8d5c-933890010c96 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:44:50 ha-919901 crio[680]: time="2024-08-12 10:44:50.786162582Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8542d2fe34f2b44c191e084e1f85f0eb7b1a0b1a39d63b3fd3ba0510027c5668,PodSandboxId:40dfaa461230a2e373c966c081e58e06b69e54e472764d0338e67823d0a759f7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723459217675933508,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pj8gg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9a02941-b2f3-4ffe-bdca-07a7322887b1,},Annotations:map[string]string{io.kubernetes.container.hash: cbfab74f,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0c6b246369b77b455404b70fd88b94e4da26cbd201a7ae30b74cb7039d25b8,PodSandboxId:7ee3eb4b0b10eb268d84ccf4b86511e118adba8ff72671265ad6936653c79876,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723459065193851382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wstd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53bfc998-8d70-4dc5-b0f9-a78610183a2b,},Annotations:map[string]string{io.kubernetes.container.hash: de677b7f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec7364f484b0d7fa35ace0124b8dc922a617b051b4ef8dc8d5513b3c9f49bf2b,PodSandboxId:a88f690225d3f6e35d63e64a591930de23dbdadec994a1d50c2a1302aaf9567d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723459065148016455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rc7cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
92f21234-d4e8-4f0e-a8e5-356db2297843,},Annotations:map[string]string{io.kubernetes.container.hash: f28abbb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0559eb25599b7a516fc431c43609c49bcf8d4a2d3a121ef0c25beb12c3ae16d,PodSandboxId:da089fb8954d6aad7bc10671ec94fd0050672aa408f2e4a34616fbda29b7753e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723459064778861507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d697e68-33fa-4784-90d8-0561d3fff6a8,},Annotations:map[string]string{io.kubernetes.container.hash: 4520a48c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d3c2394cc8cd76cb420cb4132296fa326bf586e6079af20c991bbd39a02e6bf,PodSandboxId:2abd5fefba6f34c44814b969b1a9bc9f1aa44fea6ad95145b733cb0c7e82bff9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CO
NTAINER_RUNNING,CreatedAt:1723459052942829200,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k5wz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e585a5-9ab7-4211-8ed0-dc1d21345883,},Annotations:map[string]string{io.kubernetes.container.hash: 44f25b1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd3e13fb2b3b4d48e2306ffd36c6241a47faafe75f1b528e3a316c9bec0705f,PodSandboxId:b7d28551c45a6f766983f4c1618f75eefcc2aa63991e48279d50e948648a119a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172345904
8117988565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftvfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed243a1-62f6-4ad1-8873-0fbe1756be9e,},Annotations:map[string]string{io.kubernetes.container.hash: 3e160f3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52237e0a859ca116f637782e69b8c477b172bcffe7dd962dcf7401651171c5ed,PodSandboxId:54a5959bc96a8e32170b615df8c382f8167bfb728ed211773bfe7d2c3147bf04,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17234590309
94071221,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bd97a44252f63fcee403b7e2f9c96fb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af78571207ce34c193f10ac91fe17888677e013439ec5e2bf1781b8f46309bf,PodSandboxId:06243d97384e5ebbf9480bd664848a21f3329d17103ac89c25d646cd5c6af2ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723459028074752327,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e4b07d53879d3ec685dd71228335fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c30877cfdccaa151966735f7bbaba34a1f500f301ce489538d01d863c68ee14,PodSandboxId:fae04d253fe0c5b3c2344ea51b9e34bb9ffbb9a35549113dbbdf82302161c5db,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723459028024412622,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148b0299f3f1839b42d2ab8e65cc0f2a,},Annotations:map[string]string{io.kubernetes.container.hash: 1194859b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b624c8fe2100a8281fab931d59941e13a68b3367ee7a36ece28d6087e8d1a6f,PodSandboxId:80f8c160f0149309a933338c0effa175e263894a3caa3501b57315b7b3a0fada,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723459028017431962,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: ku
be-apiserver-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e967e3926409b9b4490fa429d62fdc,},Annotations:map[string]string{io.kubernetes.container.hash: a14afb07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e76a506154546c22ce7972ea95053e0254f2cc2e30d7e1e31a666f212969115e,PodSandboxId:f1ce2bfb06df99d082f44d577edbb34634858412901a7fc407f11eb1ec217ccf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723459027942776933,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2498c72d72e1e71b3b9015542989ea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=42ebe78f-4ee4-4922-8d5c-933890010c96 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:44:50 ha-919901 crio[680]: time="2024-08-12 10:44:50.826629771Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eed54e03-1abb-44d8-9cef-96f643533d03 name=/runtime.v1.RuntimeService/Version
	Aug 12 10:44:50 ha-919901 crio[680]: time="2024-08-12 10:44:50.826700158Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eed54e03-1abb-44d8-9cef-96f643533d03 name=/runtime.v1.RuntimeService/Version
	Aug 12 10:44:50 ha-919901 crio[680]: time="2024-08-12 10:44:50.827773879Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e78f7156-5712-4930-a576-6b82166ae1c2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:44:50 ha-919901 crio[680]: time="2024-08-12 10:44:50.828211693Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723459490828189177,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e78f7156-5712-4930-a576-6b82166ae1c2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:44:50 ha-919901 crio[680]: time="2024-08-12 10:44:50.828685273Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=97e6c506-9c35-4b0d-9010-a2048e615a0b name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:44:50 ha-919901 crio[680]: time="2024-08-12 10:44:50.828736608Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=97e6c506-9c35-4b0d-9010-a2048e615a0b name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:44:50 ha-919901 crio[680]: time="2024-08-12 10:44:50.828975195Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8542d2fe34f2b44c191e084e1f85f0eb7b1a0b1a39d63b3fd3ba0510027c5668,PodSandboxId:40dfaa461230a2e373c966c081e58e06b69e54e472764d0338e67823d0a759f7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723459217675933508,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pj8gg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9a02941-b2f3-4ffe-bdca-07a7322887b1,},Annotations:map[string]string{io.kubernetes.container.hash: cbfab74f,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0c6b246369b77b455404b70fd88b94e4da26cbd201a7ae30b74cb7039d25b8,PodSandboxId:7ee3eb4b0b10eb268d84ccf4b86511e118adba8ff72671265ad6936653c79876,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723459065193851382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wstd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53bfc998-8d70-4dc5-b0f9-a78610183a2b,},Annotations:map[string]string{io.kubernetes.container.hash: de677b7f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec7364f484b0d7fa35ace0124b8dc922a617b051b4ef8dc8d5513b3c9f49bf2b,PodSandboxId:a88f690225d3f6e35d63e64a591930de23dbdadec994a1d50c2a1302aaf9567d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723459065148016455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rc7cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
92f21234-d4e8-4f0e-a8e5-356db2297843,},Annotations:map[string]string{io.kubernetes.container.hash: f28abbb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0559eb25599b7a516fc431c43609c49bcf8d4a2d3a121ef0c25beb12c3ae16d,PodSandboxId:da089fb8954d6aad7bc10671ec94fd0050672aa408f2e4a34616fbda29b7753e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723459064778861507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d697e68-33fa-4784-90d8-0561d3fff6a8,},Annotations:map[string]string{io.kubernetes.container.hash: 4520a48c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d3c2394cc8cd76cb420cb4132296fa326bf586e6079af20c991bbd39a02e6bf,PodSandboxId:2abd5fefba6f34c44814b969b1a9bc9f1aa44fea6ad95145b733cb0c7e82bff9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CO
NTAINER_RUNNING,CreatedAt:1723459052942829200,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k5wz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e585a5-9ab7-4211-8ed0-dc1d21345883,},Annotations:map[string]string{io.kubernetes.container.hash: 44f25b1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd3e13fb2b3b4d48e2306ffd36c6241a47faafe75f1b528e3a316c9bec0705f,PodSandboxId:b7d28551c45a6f766983f4c1618f75eefcc2aa63991e48279d50e948648a119a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172345904
8117988565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftvfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed243a1-62f6-4ad1-8873-0fbe1756be9e,},Annotations:map[string]string{io.kubernetes.container.hash: 3e160f3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52237e0a859ca116f637782e69b8c477b172bcffe7dd962dcf7401651171c5ed,PodSandboxId:54a5959bc96a8e32170b615df8c382f8167bfb728ed211773bfe7d2c3147bf04,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17234590309
94071221,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bd97a44252f63fcee403b7e2f9c96fb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af78571207ce34c193f10ac91fe17888677e013439ec5e2bf1781b8f46309bf,PodSandboxId:06243d97384e5ebbf9480bd664848a21f3329d17103ac89c25d646cd5c6af2ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723459028074752327,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e4b07d53879d3ec685dd71228335fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c30877cfdccaa151966735f7bbaba34a1f500f301ce489538d01d863c68ee14,PodSandboxId:fae04d253fe0c5b3c2344ea51b9e34bb9ffbb9a35549113dbbdf82302161c5db,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723459028024412622,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148b0299f3f1839b42d2ab8e65cc0f2a,},Annotations:map[string]string{io.kubernetes.container.hash: 1194859b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b624c8fe2100a8281fab931d59941e13a68b3367ee7a36ece28d6087e8d1a6f,PodSandboxId:80f8c160f0149309a933338c0effa175e263894a3caa3501b57315b7b3a0fada,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723459028017431962,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: ku
be-apiserver-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e967e3926409b9b4490fa429d62fdc,},Annotations:map[string]string{io.kubernetes.container.hash: a14afb07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e76a506154546c22ce7972ea95053e0254f2cc2e30d7e1e31a666f212969115e,PodSandboxId:f1ce2bfb06df99d082f44d577edbb34634858412901a7fc407f11eb1ec217ccf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723459027942776933,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2498c72d72e1e71b3b9015542989ea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=97e6c506-9c35-4b0d-9010-a2048e615a0b name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:44:50 ha-919901 crio[680]: time="2024-08-12 10:44:50.869656003Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aa148f87-2679-4f51-9659-f54a160f9f26 name=/runtime.v1.RuntimeService/Version
	Aug 12 10:44:50 ha-919901 crio[680]: time="2024-08-12 10:44:50.869738332Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aa148f87-2679-4f51-9659-f54a160f9f26 name=/runtime.v1.RuntimeService/Version
	Aug 12 10:44:50 ha-919901 crio[680]: time="2024-08-12 10:44:50.871130305Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=21b38be6-79a7-473d-a6c3-961c684d4d96 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:44:50 ha-919901 crio[680]: time="2024-08-12 10:44:50.871750749Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723459490871723813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=21b38be6-79a7-473d-a6c3-961c684d4d96 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:44:50 ha-919901 crio[680]: time="2024-08-12 10:44:50.872388386Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b64184ca-76fd-4501-8036-319d5b787b3e name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:44:50 ha-919901 crio[680]: time="2024-08-12 10:44:50.872449459Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b64184ca-76fd-4501-8036-319d5b787b3e name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:44:50 ha-919901 crio[680]: time="2024-08-12 10:44:50.872667101Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8542d2fe34f2b44c191e084e1f85f0eb7b1a0b1a39d63b3fd3ba0510027c5668,PodSandboxId:40dfaa461230a2e373c966c081e58e06b69e54e472764d0338e67823d0a759f7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723459217675933508,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pj8gg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9a02941-b2f3-4ffe-bdca-07a7322887b1,},Annotations:map[string]string{io.kubernetes.container.hash: cbfab74f,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0c6b246369b77b455404b70fd88b94e4da26cbd201a7ae30b74cb7039d25b8,PodSandboxId:7ee3eb4b0b10eb268d84ccf4b86511e118adba8ff72671265ad6936653c79876,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723459065193851382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wstd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53bfc998-8d70-4dc5-b0f9-a78610183a2b,},Annotations:map[string]string{io.kubernetes.container.hash: de677b7f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec7364f484b0d7fa35ace0124b8dc922a617b051b4ef8dc8d5513b3c9f49bf2b,PodSandboxId:a88f690225d3f6e35d63e64a591930de23dbdadec994a1d50c2a1302aaf9567d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723459065148016455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rc7cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
92f21234-d4e8-4f0e-a8e5-356db2297843,},Annotations:map[string]string{io.kubernetes.container.hash: f28abbb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0559eb25599b7a516fc431c43609c49bcf8d4a2d3a121ef0c25beb12c3ae16d,PodSandboxId:da089fb8954d6aad7bc10671ec94fd0050672aa408f2e4a34616fbda29b7753e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723459064778861507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d697e68-33fa-4784-90d8-0561d3fff6a8,},Annotations:map[string]string{io.kubernetes.container.hash: 4520a48c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d3c2394cc8cd76cb420cb4132296fa326bf586e6079af20c991bbd39a02e6bf,PodSandboxId:2abd5fefba6f34c44814b969b1a9bc9f1aa44fea6ad95145b733cb0c7e82bff9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CO
NTAINER_RUNNING,CreatedAt:1723459052942829200,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k5wz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e585a5-9ab7-4211-8ed0-dc1d21345883,},Annotations:map[string]string{io.kubernetes.container.hash: 44f25b1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd3e13fb2b3b4d48e2306ffd36c6241a47faafe75f1b528e3a316c9bec0705f,PodSandboxId:b7d28551c45a6f766983f4c1618f75eefcc2aa63991e48279d50e948648a119a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172345904
8117988565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftvfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed243a1-62f6-4ad1-8873-0fbe1756be9e,},Annotations:map[string]string{io.kubernetes.container.hash: 3e160f3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52237e0a859ca116f637782e69b8c477b172bcffe7dd962dcf7401651171c5ed,PodSandboxId:54a5959bc96a8e32170b615df8c382f8167bfb728ed211773bfe7d2c3147bf04,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17234590309
94071221,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bd97a44252f63fcee403b7e2f9c96fb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af78571207ce34c193f10ac91fe17888677e013439ec5e2bf1781b8f46309bf,PodSandboxId:06243d97384e5ebbf9480bd664848a21f3329d17103ac89c25d646cd5c6af2ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723459028074752327,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e4b07d53879d3ec685dd71228335fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c30877cfdccaa151966735f7bbaba34a1f500f301ce489538d01d863c68ee14,PodSandboxId:fae04d253fe0c5b3c2344ea51b9e34bb9ffbb9a35549113dbbdf82302161c5db,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723459028024412622,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148b0299f3f1839b42d2ab8e65cc0f2a,},Annotations:map[string]string{io.kubernetes.container.hash: 1194859b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b624c8fe2100a8281fab931d59941e13a68b3367ee7a36ece28d6087e8d1a6f,PodSandboxId:80f8c160f0149309a933338c0effa175e263894a3caa3501b57315b7b3a0fada,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723459028017431962,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: ku
be-apiserver-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e967e3926409b9b4490fa429d62fdc,},Annotations:map[string]string{io.kubernetes.container.hash: a14afb07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e76a506154546c22ce7972ea95053e0254f2cc2e30d7e1e31a666f212969115e,PodSandboxId:f1ce2bfb06df99d082f44d577edbb34634858412901a7fc407f11eb1ec217ccf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723459027942776933,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2498c72d72e1e71b3b9015542989ea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b64184ca-76fd-4501-8036-319d5b787b3e name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:44:50 ha-919901 crio[680]: time="2024-08-12 10:44:50.909973807Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9420235f-97f4-4309-b7e6-f778e5c064fc name=/runtime.v1.RuntimeService/Version
	Aug 12 10:44:50 ha-919901 crio[680]: time="2024-08-12 10:44:50.910051948Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9420235f-97f4-4309-b7e6-f778e5c064fc name=/runtime.v1.RuntimeService/Version
	Aug 12 10:44:50 ha-919901 crio[680]: time="2024-08-12 10:44:50.911669611Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=220ba9ac-6f36-4f76-b0d0-38449889e238 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:44:50 ha-919901 crio[680]: time="2024-08-12 10:44:50.912163388Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723459490912135328,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=220ba9ac-6f36-4f76-b0d0-38449889e238 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:44:50 ha-919901 crio[680]: time="2024-08-12 10:44:50.912808038Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=963ada6a-d58e-47d6-b09a-ca885f463a50 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:44:50 ha-919901 crio[680]: time="2024-08-12 10:44:50.912871892Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=963ada6a-d58e-47d6-b09a-ca885f463a50 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:44:50 ha-919901 crio[680]: time="2024-08-12 10:44:50.913100684Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8542d2fe34f2b44c191e084e1f85f0eb7b1a0b1a39d63b3fd3ba0510027c5668,PodSandboxId:40dfaa461230a2e373c966c081e58e06b69e54e472764d0338e67823d0a759f7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723459217675933508,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pj8gg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9a02941-b2f3-4ffe-bdca-07a7322887b1,},Annotations:map[string]string{io.kubernetes.container.hash: cbfab74f,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0c6b246369b77b455404b70fd88b94e4da26cbd201a7ae30b74cb7039d25b8,PodSandboxId:7ee3eb4b0b10eb268d84ccf4b86511e118adba8ff72671265ad6936653c79876,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723459065193851382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wstd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53bfc998-8d70-4dc5-b0f9-a78610183a2b,},Annotations:map[string]string{io.kubernetes.container.hash: de677b7f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec7364f484b0d7fa35ace0124b8dc922a617b051b4ef8dc8d5513b3c9f49bf2b,PodSandboxId:a88f690225d3f6e35d63e64a591930de23dbdadec994a1d50c2a1302aaf9567d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723459065148016455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rc7cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
92f21234-d4e8-4f0e-a8e5-356db2297843,},Annotations:map[string]string{io.kubernetes.container.hash: f28abbb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0559eb25599b7a516fc431c43609c49bcf8d4a2d3a121ef0c25beb12c3ae16d,PodSandboxId:da089fb8954d6aad7bc10671ec94fd0050672aa408f2e4a34616fbda29b7753e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723459064778861507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d697e68-33fa-4784-90d8-0561d3fff6a8,},Annotations:map[string]string{io.kubernetes.container.hash: 4520a48c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d3c2394cc8cd76cb420cb4132296fa326bf586e6079af20c991bbd39a02e6bf,PodSandboxId:2abd5fefba6f34c44814b969b1a9bc9f1aa44fea6ad95145b733cb0c7e82bff9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CO
NTAINER_RUNNING,CreatedAt:1723459052942829200,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k5wz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e585a5-9ab7-4211-8ed0-dc1d21345883,},Annotations:map[string]string{io.kubernetes.container.hash: 44f25b1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd3e13fb2b3b4d48e2306ffd36c6241a47faafe75f1b528e3a316c9bec0705f,PodSandboxId:b7d28551c45a6f766983f4c1618f75eefcc2aa63991e48279d50e948648a119a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172345904
8117988565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftvfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed243a1-62f6-4ad1-8873-0fbe1756be9e,},Annotations:map[string]string{io.kubernetes.container.hash: 3e160f3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52237e0a859ca116f637782e69b8c477b172bcffe7dd962dcf7401651171c5ed,PodSandboxId:54a5959bc96a8e32170b615df8c382f8167bfb728ed211773bfe7d2c3147bf04,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17234590309
94071221,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bd97a44252f63fcee403b7e2f9c96fb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af78571207ce34c193f10ac91fe17888677e013439ec5e2bf1781b8f46309bf,PodSandboxId:06243d97384e5ebbf9480bd664848a21f3329d17103ac89c25d646cd5c6af2ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723459028074752327,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e4b07d53879d3ec685dd71228335fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c30877cfdccaa151966735f7bbaba34a1f500f301ce489538d01d863c68ee14,PodSandboxId:fae04d253fe0c5b3c2344ea51b9e34bb9ffbb9a35549113dbbdf82302161c5db,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723459028024412622,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148b0299f3f1839b42d2ab8e65cc0f2a,},Annotations:map[string]string{io.kubernetes.container.hash: 1194859b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b624c8fe2100a8281fab931d59941e13a68b3367ee7a36ece28d6087e8d1a6f,PodSandboxId:80f8c160f0149309a933338c0effa175e263894a3caa3501b57315b7b3a0fada,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723459028017431962,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: ku
be-apiserver-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e967e3926409b9b4490fa429d62fdc,},Annotations:map[string]string{io.kubernetes.container.hash: a14afb07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e76a506154546c22ce7972ea95053e0254f2cc2e30d7e1e31a666f212969115e,PodSandboxId:f1ce2bfb06df99d082f44d577edbb34634858412901a7fc407f11eb1ec217ccf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723459027942776933,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2498c72d72e1e71b3b9015542989ea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=963ada6a-d58e-47d6-b09a-ca885f463a50 name=/runtime.v1.RuntimeService/ListContainers
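The Version, ImageFsInfo, and ListContainers requests recorded in the CRI-O debug log above are standard CRI RPCs served over the runtime's local socket; the large responses are full container listings returned for an empty filter. Below is a minimal sketch of a Go client issuing the same three calls, assuming the default CRI-O socket path /var/run/crio/crio.sock and the k8s.io/cri-api v1 bindings. It is illustrative only, not the code that produced these log lines.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumes the default CRI-O socket path; adjust for other runtimes.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// RuntimeService/Version — the "cri-o 1.29.1" response seen in the log.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion)

	// ImageService/ImageFsInfo — image filesystem usage per mountpoint.
	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, f := range fs.ImageFilesystems {
		fmt.Println("image fs:", f.FsId.Mountpoint, "used bytes:", f.UsedBytes.Value)
	}

	// RuntimeService/ListContainers with an empty filter returns the full
	// container list, which is what produces the large responses above.
	cs, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range cs.Containers {
		fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.State, c.Metadata.Name)
	}
}
```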
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8542d2fe34f2b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   40dfaa461230a       busybox-fc5497c4f-pj8gg
	6d0c6b246369b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   7ee3eb4b0b10e       coredns-7db6d8ff4d-wstd4
	ec7364f484b0d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   a88f690225d3f       coredns-7db6d8ff4d-rc7cl
	f0559eb25599b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   da089fb8954d6       storage-provisioner
	4d3c2394cc8cd       docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3    7 minutes ago       Running             kindnet-cni               0                   2abd5fefba6f3       kindnet-k5wz9
	7cd3e13fb2b3b       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      7 minutes ago       Running             kube-proxy                0                   b7d28551c45a6       kube-proxy-ftvfl
	52237e0a859ca       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   54a5959bc96a8       kube-vip-ha-919901
	2af78571207ce       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            0                   06243d97384e5       kube-scheduler-ha-919901
	0c30877cfdcca       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   fae04d253fe0c       etcd-ha-919901
	2b624c8fe2100       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            0                   80f8c160f0149       kube-apiserver-ha-919901
	e76a506154546       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      7 minutes ago       Running             kube-controller-manager   0                   f1ce2bfb06df9       kube-controller-manager-ha-919901
	
	
	==> coredns [6d0c6b246369b77b455404b70fd88b94e4da26cbd201a7ae30b74cb7039d25b8] <==
	[INFO] 10.244.0.4:56545 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000091906s
	[INFO] 10.244.0.4:43928 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000079555s
	[INFO] 10.244.2.2:33666 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000141234s
	[INFO] 10.244.2.2:40403 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000077505s
	[INFO] 10.244.2.2:60651 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001944453s
	[INFO] 10.244.1.2:41656 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000234118s
	[INFO] 10.244.1.2:37332 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00027744s
	[INFO] 10.244.1.2:40223 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.010736666s
	[INFO] 10.244.0.4:34313 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099644s
	[INFO] 10.244.0.4:42226 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0013952s
	[INFO] 10.244.0.4:57222 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00017573s
	[INFO] 10.244.0.4:58894 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088282s
	[INFO] 10.244.2.2:46163 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143718s
	[INFO] 10.244.2.2:51332 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158612s
	[INFO] 10.244.2.2:38508 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000102467s
	[INFO] 10.244.1.2:36638 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127128s
	[INFO] 10.244.1.2:48634 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000196174s
	[INFO] 10.244.1.2:34717 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000153611s
	[INFO] 10.244.1.2:59132 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121069s
	[INFO] 10.244.0.4:52263 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00018165s
	[INFO] 10.244.0.4:33949 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000137401s
	[INFO] 10.244.0.4:50775 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000059871s
	[INFO] 10.244.2.2:49015 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152696s
	[INFO] 10.244.2.2:39997 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000159415s
	[INFO] 10.244.2.2:33769 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000094484s
	
	
	==> coredns [ec7364f484b0d7fa35ace0124b8dc922a617b051b4ef8dc8d5513b3c9f49bf2b] <==
	[INFO] 10.244.1.2:40066 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158597s
	[INFO] 10.244.1.2:59324 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000176108s
	[INFO] 10.244.0.4:36927 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001973861s
	[INFO] 10.244.0.4:39495 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000244693s
	[INFO] 10.244.0.4:42312 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000071889s
	[INFO] 10.244.0.4:36852 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079487s
	[INFO] 10.244.2.2:51413 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001945024s
	[INFO] 10.244.2.2:47991 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000079163s
	[INFO] 10.244.2.2:37019 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001502663s
	[INFO] 10.244.2.2:54793 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077144s
	[INFO] 10.244.2.2:58782 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056455s
	[INFO] 10.244.1.2:54292 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137507s
	[INFO] 10.244.1.2:59115 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089729s
	[INFO] 10.244.0.4:40377 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115376s
	[INFO] 10.244.0.4:56017 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088959s
	[INFO] 10.244.0.4:52411 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000057997s
	[INFO] 10.244.0.4:46999 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00005214s
	[INFO] 10.244.2.2:42855 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167607s
	[INFO] 10.244.2.2:43154 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117622s
	[INFO] 10.244.2.2:33056 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087079s
	[INFO] 10.244.2.2:52436 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114815s
	[INFO] 10.244.1.2:57727 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129686s
	[INFO] 10.244.1.2:60878 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00018786s
	[INFO] 10.244.0.4:47644 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000114448s
	[INFO] 10.244.2.2:38930 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000159722s
	
	
	==> describe nodes <==
	Name:               ha-919901
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-919901
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
	                    minikube.k8s.io/name=ha-919901
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_12T10_37_15_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 10:37:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-919901
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 10:44:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 10:40:47 +0000   Mon, 12 Aug 2024 10:37:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 10:40:47 +0000   Mon, 12 Aug 2024 10:37:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 10:40:47 +0000   Mon, 12 Aug 2024 10:37:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 10:40:47 +0000   Mon, 12 Aug 2024 10:37:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.5
	  Hostname:    ha-919901
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0604b91ac2ed4dfdb4f1eba3f89f2634
	  System UUID:                0604b91a-c2ed-4dfd-b4f1-eba3f89f2634
	  Boot ID:                    e69dd59d-8862-4943-a8be-e27de6624ddc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pj8gg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 coredns-7db6d8ff4d-rc7cl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m24s
	  kube-system                 coredns-7db6d8ff4d-wstd4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m24s
	  kube-system                 etcd-ha-919901                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m37s
	  kube-system                 kindnet-k5wz9                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m24s
	  kube-system                 kube-apiserver-ha-919901             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m37s
	  kube-system                 kube-controller-manager-ha-919901    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m37s
	  kube-system                 kube-proxy-ftvfl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 kube-scheduler-ha-919901             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m37s
	  kube-system                 kube-vip-ha-919901                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m37s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m22s  kube-proxy       
	  Normal  Starting                 7m37s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m37s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m37s  kubelet          Node ha-919901 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m37s  kubelet          Node ha-919901 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m37s  kubelet          Node ha-919901 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m25s  node-controller  Node ha-919901 event: Registered Node ha-919901 in Controller
	  Normal  NodeReady                7m7s   kubelet          Node ha-919901 status is now: NodeReady
	  Normal  RegisteredNode           6m5s   node-controller  Node ha-919901 event: Registered Node ha-919901 in Controller
	  Normal  RegisteredNode           4m48s  node-controller  Node ha-919901 event: Registered Node ha-919901 in Controller
	
	
	Name:               ha-919901-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-919901-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
	                    minikube.k8s.io/name=ha-919901
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_12T10_38_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 10:38:28 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-919901-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 10:41:22 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 12 Aug 2024 10:40:31 +0000   Mon, 12 Aug 2024 10:42:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 12 Aug 2024 10:40:31 +0000   Mon, 12 Aug 2024 10:42:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 12 Aug 2024 10:40:31 +0000   Mon, 12 Aug 2024 10:42:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 12 Aug 2024 10:40:31 +0000   Mon, 12 Aug 2024 10:42:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.139
	  Hostname:    ha-919901-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b2d78288ee7d4cf8b54a7dd9f4bdd0a2
	  System UUID:                b2d78288-ee7d-4cf8-b54a-7dd9f4bdd0a2
	  Boot ID:                    fc484ec8-2cf0-4341-b6f0-32aea18b1ad9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-46rph                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 etcd-ha-919901-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m21s
	  kube-system                 kindnet-8cqm5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m23s
	  kube-system                 kube-apiserver-ha-919901-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-controller-manager-ha-919901-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-proxy-cczfj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-scheduler-ha-919901-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-vip-ha-919901-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m18s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m23s (x8 over 6m23s)  kubelet          Node ha-919901-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m23s (x8 over 6m23s)  kubelet          Node ha-919901-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m23s (x7 over 6m23s)  kubelet          Node ha-919901-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m20s                  node-controller  Node ha-919901-m02 event: Registered Node ha-919901-m02 in Controller
	  Normal  RegisteredNode           6m5s                   node-controller  Node ha-919901-m02 event: Registered Node ha-919901-m02 in Controller
	  Normal  RegisteredNode           4m48s                  node-controller  Node ha-919901-m02 event: Registered Node ha-919901-m02 in Controller
	  Normal  NodeNotReady             2m48s                  node-controller  Node ha-919901-m02 status is now: NodeNotReady
	
	
	Name:               ha-919901-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-919901-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
	                    minikube.k8s.io/name=ha-919901
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_12T10_39_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 10:39:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-919901-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 10:44:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 10:40:46 +0000   Mon, 12 Aug 2024 10:39:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 10:40:46 +0000   Mon, 12 Aug 2024 10:39:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 10:40:46 +0000   Mon, 12 Aug 2024 10:39:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 10:40:46 +0000   Mon, 12 Aug 2024 10:40:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    ha-919901-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 018b12c9070f4bf48440eace9c0062df
	  System UUID:                018b12c9-070f-4bf4-8440-eace9c0062df
	  Boot ID:                    e9258875-f780-4a62-84da-f7421903e7ec
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-v6ddx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 etcd-ha-919901-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m4s
	  kube-system                 kindnet-6v7rs                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m6s
	  kube-system                 kube-apiserver-ha-919901-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-controller-manager-ha-919901-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-proxy-6xqjr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-scheduler-ha-919901-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-vip-ha-919901-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  5m6s (x8 over 5m6s)  kubelet          Node ha-919901-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m6s (x8 over 5m6s)  kubelet          Node ha-919901-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m6s (x7 over 5m6s)  kubelet          Node ha-919901-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m5s                 node-controller  Node ha-919901-m03 event: Registered Node ha-919901-m03 in Controller
	  Normal  RegisteredNode           5m5s                 node-controller  Node ha-919901-m03 event: Registered Node ha-919901-m03 in Controller
	  Normal  RegisteredNode           4m48s                node-controller  Node ha-919901-m03 event: Registered Node ha-919901-m03 in Controller
	
	
	Name:               ha-919901-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-919901-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
	                    minikube.k8s.io/name=ha-919901
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_12T10_40_49_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 10:40:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-919901-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 10:44:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 10:41:19 +0000   Mon, 12 Aug 2024 10:40:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 10:41:19 +0000   Mon, 12 Aug 2024 10:40:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 10:41:19 +0000   Mon, 12 Aug 2024 10:40:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 10:41:19 +0000   Mon, 12 Aug 2024 10:41:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.218
	  Hostname:    ha-919901-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d9924b3342904c65bcf17b38012b444a
	  System UUID:                d9924b33-4290-4c65-bcf1-7b38012b444a
	  Boot ID:                    04e52e72-fe17-4416-bddf-da5e40736490
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-clr9b       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m3s
	  kube-system                 kube-proxy-2h4vt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m58s                kube-proxy       
	  Normal  RegisteredNode           4m3s                 node-controller  Node ha-919901-m04 event: Registered Node ha-919901-m04 in Controller
	  Normal  NodeHasSufficientMemory  4m3s (x2 over 4m3s)  kubelet          Node ha-919901-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m3s (x2 over 4m3s)  kubelet          Node ha-919901-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m3s (x2 over 4m3s)  kubelet          Node ha-919901-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m                   node-controller  Node ha-919901-m04 event: Registered Node ha-919901-m04 in Controller
	  Normal  RegisteredNode           4m                   node-controller  Node ha-919901-m04 event: Registered Node ha-919901-m04 in Controller
	  Normal  NodeReady                3m44s                kubelet          Node ha-919901-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug12 10:36] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050882] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037870] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.740086] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.846102] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.484807] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.272888] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.064986] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.049228] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.190717] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.120674] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.278615] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[Aug12 10:37] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +3.648433] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.060066] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.249848] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.088679] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.931862] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.868842] kauditd_printk_skb: 29 callbacks suppressed
	[Aug12 10:38] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [0c30877cfdccaa151966735f7bbaba34a1f500f301ce489538d01d863c68ee14] <==
	{"level":"warn","ts":"2024-08-12T10:44:51.17046Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:44:51.175331Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:44:51.192309Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:44:51.203531Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:44:51.225478Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:44:51.230471Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:44:51.253508Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:44:51.2645Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:44:51.27666Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:44:51.290816Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:44:51.292513Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:44:51.297343Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:44:51.308974Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:44:51.313816Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:44:51.318948Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:44:51.32867Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:44:51.335149Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:44:51.341804Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:44:51.346349Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:44:51.349753Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:44:51.356086Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:44:51.36225Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:44:51.36976Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:44:51.392338Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:44:51.424935Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 10:44:51 up 8 min,  0 users,  load average: 0.21, 0.38, 0.24
	Linux ha-919901 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4d3c2394cc8cd76cb420cb4132296fa326bf586e6079af20c991bbd39a02e6bf] <==
	I0812 10:44:13.955399       1 main.go:322] Node ha-919901-m03 has CIDR [10.244.2.0/24] 
	I0812 10:44:23.960082       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0812 10:44:23.960131       1 main.go:299] handling current node
	I0812 10:44:23.960149       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0812 10:44:23.960158       1 main.go:322] Node ha-919901-m02 has CIDR [10.244.1.0/24] 
	I0812 10:44:23.960359       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0812 10:44:23.960412       1 main.go:322] Node ha-919901-m03 has CIDR [10.244.2.0/24] 
	I0812 10:44:23.960478       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0812 10:44:23.960496       1 main.go:322] Node ha-919901-m04 has CIDR [10.244.3.0/24] 
	I0812 10:44:33.952331       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0812 10:44:33.952371       1 main.go:299] handling current node
	I0812 10:44:33.952417       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0812 10:44:33.952424       1 main.go:322] Node ha-919901-m02 has CIDR [10.244.1.0/24] 
	I0812 10:44:33.952572       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0812 10:44:33.952592       1 main.go:322] Node ha-919901-m03 has CIDR [10.244.2.0/24] 
	I0812 10:44:33.952683       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0812 10:44:33.952698       1 main.go:322] Node ha-919901-m04 has CIDR [10.244.3.0/24] 
	I0812 10:44:43.951599       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0812 10:44:43.951784       1 main.go:299] handling current node
	I0812 10:44:43.951816       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0812 10:44:43.951875       1 main.go:322] Node ha-919901-m02 has CIDR [10.244.1.0/24] 
	I0812 10:44:43.952187       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0812 10:44:43.952285       1 main.go:322] Node ha-919901-m03 has CIDR [10.244.2.0/24] 
	I0812 10:44:43.952490       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0812 10:44:43.952515       1 main.go:322] Node ha-919901-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [2b624c8fe2100a8281fab931d59941e13a68b3367ee7a36ece28d6087e8d1a6f] <==
	I0812 10:37:13.160923       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0812 10:37:13.174462       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.5]
	I0812 10:37:13.176611       1 controller.go:615] quota admission added evaluator for: endpoints
	I0812 10:37:13.181941       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0812 10:37:13.260864       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0812 10:37:14.337272       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0812 10:37:14.360762       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0812 10:37:14.504891       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0812 10:37:26.787949       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0812 10:37:27.466488       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0812 10:40:18.956949       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44182: use of closed network connection
	E0812 10:40:19.142427       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44196: use of closed network connection
	E0812 10:40:19.346412       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44216: use of closed network connection
	E0812 10:40:19.541746       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44224: use of closed network connection
	E0812 10:40:19.719361       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44238: use of closed network connection
	E0812 10:40:19.904586       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44244: use of closed network connection
	E0812 10:40:20.086113       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44274: use of closed network connection
	E0812 10:40:20.278779       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44292: use of closed network connection
	E0812 10:40:20.460778       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44310: use of closed network connection
	E0812 10:40:20.761979       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46406: use of closed network connection
	E0812 10:40:20.936139       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46428: use of closed network connection
	E0812 10:40:21.162853       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46438: use of closed network connection
	E0812 10:40:21.350699       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46458: use of closed network connection
	E0812 10:40:21.531307       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46462: use of closed network connection
	E0812 10:40:21.716849       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46490: use of closed network connection
	
	
	==> kube-controller-manager [e76a506154546c22ce7972ea95053e0254f2cc2e30d7e1e31a666f212969115e] <==
	I0812 10:39:45.372434       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-919901-m03" podCIDRs=["10.244.2.0/24"]
	I0812 10:39:46.766154       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-919901-m03"
	I0812 10:40:14.595729       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="126.558106ms"
	I0812 10:40:14.727932       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="132.019982ms"
	I0812 10:40:14.902293       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="173.295981ms"
	I0812 10:40:15.008810       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="106.469116ms"
	E0812 10:40:15.008860       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0812 10:40:15.009076       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="138.079µs"
	I0812 10:40:15.016147       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.267µs"
	I0812 10:40:15.282291       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="112.327µs"
	I0812 10:40:18.258747       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.598612ms"
	I0812 10:40:18.259274       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="125.915µs"
	I0812 10:40:18.291900       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.247096ms"
	I0812 10:40:18.293628       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.082µs"
	I0812 10:40:18.495732       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.563624ms"
	I0812 10:40:18.496958       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="105.896µs"
	E0812 10:40:48.092722       1 certificate_controller.go:146] Sync csr-cvlct failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-cvlct": the object has been modified; please apply your changes to the latest version and try again
	E0812 10:40:48.102197       1 certificate_controller.go:146] Sync csr-cvlct failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-cvlct": the object has been modified; please apply your changes to the latest version and try again
	I0812 10:40:48.366957       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-919901-m04\" does not exist"
	I0812 10:40:48.414765       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-919901-m04" podCIDRs=["10.244.3.0/24"]
	I0812 10:40:51.870064       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-919901-m04"
	I0812 10:41:07.861699       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-919901-m04"
	I0812 10:42:03.832401       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-919901-m04"
	I0812 10:42:03.879343       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.03547ms"
	I0812 10:42:03.880483       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.255µs"
	
	
	==> kube-proxy [7cd3e13fb2b3b4d48e2306ffd36c6241a47faafe75f1b528e3a316c9bec0705f] <==
	I0812 10:37:28.448360       1 server_linux.go:69] "Using iptables proxy"
	I0812 10:37:28.490783       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.5"]
	I0812 10:37:28.537171       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0812 10:37:28.537271       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0812 10:37:28.537290       1 server_linux.go:165] "Using iptables Proxier"
	I0812 10:37:28.541575       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0812 10:37:28.542279       1 server.go:872] "Version info" version="v1.30.3"
	I0812 10:37:28.542307       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 10:37:28.546922       1 config.go:192] "Starting service config controller"
	I0812 10:37:28.546997       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0812 10:37:28.547176       1 config.go:101] "Starting endpoint slice config controller"
	I0812 10:37:28.547313       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0812 10:37:28.548759       1 config.go:319] "Starting node config controller"
	I0812 10:37:28.548785       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0812 10:37:28.648203       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0812 10:37:28.648337       1 shared_informer.go:320] Caches are synced for service config
	I0812 10:37:28.649030       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2af78571207ce34c193f10ac91fe17888677e013439ec5e2bf1781b8f46309bf] <==
	E0812 10:37:12.736146       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0812 10:37:14.999883       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0812 10:39:45.445909       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-6xqjr\": pod kube-proxy-6xqjr is already assigned to node \"ha-919901-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-6xqjr" node="ha-919901-m03"
	E0812 10:39:45.446133       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-6xqjr\": pod kube-proxy-6xqjr is already assigned to node \"ha-919901-m03\"" pod="kube-system/kube-proxy-6xqjr"
	I0812 10:39:45.446184       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-6xqjr" node="ha-919901-m03"
	E0812 10:39:45.446998       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-6v7rs\": pod kindnet-6v7rs is already assigned to node \"ha-919901-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-6v7rs" node="ha-919901-m03"
	E0812 10:39:45.447058       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 43c3bf93-f498-4ea3-b494-a1f06e64e2d2(kube-system/kindnet-6v7rs) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-6v7rs"
	E0812 10:39:45.447082       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-6v7rs\": pod kindnet-6v7rs is already assigned to node \"ha-919901-m03\"" pod="kube-system/kindnet-6v7rs"
	I0812 10:39:45.447108       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-6v7rs" node="ha-919901-m03"
	E0812 10:39:45.561301       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-xjhsb\": pod kube-proxy-xjhsb is already assigned to node \"ha-919901-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-xjhsb" node="ha-919901-m03"
	E0812 10:39:45.561578       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod b68bad98-fc42-4b06-beac-91bcaef3749c(kube-system/kube-proxy-xjhsb) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-xjhsb"
	E0812 10:39:45.561672       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-xjhsb\": pod kube-proxy-xjhsb is already assigned to node \"ha-919901-m03\"" pod="kube-system/kube-proxy-xjhsb"
	I0812 10:39:45.561699       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-xjhsb" node="ha-919901-m03"
	E0812 10:40:14.546495       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-v6ddx\": pod busybox-fc5497c4f-v6ddx is already assigned to node \"ha-919901-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-v6ddx" node="ha-919901-m03"
	E0812 10:40:14.546746       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 06fbbe15-dd57-4276-b19d-9c6c7ea2ea44(default/busybox-fc5497c4f-v6ddx) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-v6ddx"
	E0812 10:40:14.547178       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-v6ddx\": pod busybox-fc5497c4f-v6ddx is already assigned to node \"ha-919901-m03\"" pod="default/busybox-fc5497c4f-v6ddx"
	I0812 10:40:14.547314       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-v6ddx" node="ha-919901-m03"
	E0812 10:40:14.584416       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-pj8gg\": pod busybox-fc5497c4f-pj8gg is already assigned to node \"ha-919901\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-pj8gg" node="ha-919901"
	E0812 10:40:14.584474       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod b9a02941-b2f3-4ffe-bdca-07a7322887b1(default/busybox-fc5497c4f-pj8gg) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-pj8gg"
	E0812 10:40:14.584494       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-pj8gg\": pod busybox-fc5497c4f-pj8gg is already assigned to node \"ha-919901\"" pod="default/busybox-fc5497c4f-pj8gg"
	I0812 10:40:14.584510       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-pj8gg" node="ha-919901"
	E0812 10:40:14.594617       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-46rph\": pod busybox-fc5497c4f-46rph is already assigned to node \"ha-919901-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-46rph" node="ha-919901-m02"
	E0812 10:40:14.594677       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 1851351d-2c94-43c9-b72e-87f74b2326db(default/busybox-fc5497c4f-46rph) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-46rph"
	E0812 10:40:14.594693       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-46rph\": pod busybox-fc5497c4f-46rph is already assigned to node \"ha-919901-m02\"" pod="default/busybox-fc5497c4f-46rph"
	I0812 10:40:14.594711       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-46rph" node="ha-919901-m02"
	
	
	==> kubelet <==
	Aug 12 10:40:14 ha-919901 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 10:40:14 ha-919901 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 10:40:14 ha-919901 kubelet[1369]: I0812 10:40:14.581605    1369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=166.581556293 podStartE2EDuration="2m46.581556293s" podCreationTimestamp="2024-08-12 10:37:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-12 10:37:45.760893349 +0000 UTC m=+31.457240289" watchObservedRunningTime="2024-08-12 10:40:14.581556293 +0000 UTC m=+180.277903240"
	Aug 12 10:40:14 ha-919901 kubelet[1369]: I0812 10:40:14.586171    1369 topology_manager.go:215] "Topology Admit Handler" podUID="b9a02941-b2f3-4ffe-bdca-07a7322887b1" podNamespace="default" podName="busybox-fc5497c4f-pj8gg"
	Aug 12 10:40:14 ha-919901 kubelet[1369]: I0812 10:40:14.641285    1369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4htt\" (UniqueName: \"kubernetes.io/projected/b9a02941-b2f3-4ffe-bdca-07a7322887b1-kube-api-access-d4htt\") pod \"busybox-fc5497c4f-pj8gg\" (UID: \"b9a02941-b2f3-4ffe-bdca-07a7322887b1\") " pod="default/busybox-fc5497c4f-pj8gg"
	Aug 12 10:41:14 ha-919901 kubelet[1369]: E0812 10:41:14.517575    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 10:41:14 ha-919901 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 10:41:14 ha-919901 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 10:41:14 ha-919901 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 10:41:14 ha-919901 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 10:42:14 ha-919901 kubelet[1369]: E0812 10:42:14.517152    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 10:42:14 ha-919901 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 10:42:14 ha-919901 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 10:42:14 ha-919901 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 10:42:14 ha-919901 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 10:43:14 ha-919901 kubelet[1369]: E0812 10:43:14.515710    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 10:43:14 ha-919901 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 10:43:14 ha-919901 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 10:43:14 ha-919901 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 10:43:14 ha-919901 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 10:44:14 ha-919901 kubelet[1369]: E0812 10:44:14.515416    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 10:44:14 ha-919901 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 10:44:14 ha-919901 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 10:44:14 ha-919901 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 10:44:14 ha-919901 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-919901 -n ha-919901
helpers_test.go:261: (dbg) Run:  kubectl --context ha-919901 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (58.67s)
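Note on the kubelet log above: once a minute the kubelet fails to create the KUBE-KUBELET-CANARY chain because the ip6tables `nat' table is unavailable in the guest ("Table does not exist (do you need to insmod?)"). A minimal check from the host, assuming the ip6table_nat kernel module is the missing piece and the profile is still running (both assumptions, not verified from this run):

    # see whether the IPv6 NAT table module is loaded inside the ha-919901 VM
    out/minikube-linux-amd64 -p ha-919901 ssh "lsmod | grep ip6table_nat"
    # if the module exists but is unloaded, load it and confirm the nat table can be listed
    out/minikube-linux-amd64 -p ha-919901 ssh "sudo modprobe ip6table_nat && sudo ip6tables -t nat -L"

If ip6table_nat is not built for the guest kernel at all, the canary errors are expected noise and likely unrelated to the RestartSecondaryNode failure itself.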

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (374.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-919901 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-919901 -v=7 --alsologtostderr
E0812 10:45:45.936010   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
E0812 10:46:13.620258   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-919901 -v=7 --alsologtostderr: exit status 82 (2m1.783169521s)

                                                
                                                
-- stdout --
	* Stopping node "ha-919901-m04"  ...
	* Stopping node "ha-919901-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 10:44:52.863533   28062 out.go:291] Setting OutFile to fd 1 ...
	I0812 10:44:52.863693   28062 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:44:52.863704   28062 out.go:304] Setting ErrFile to fd 2...
	I0812 10:44:52.863711   28062 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:44:52.864042   28062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 10:44:52.864313   28062 out.go:298] Setting JSON to false
	I0812 10:44:52.864403   28062 mustload.go:65] Loading cluster: ha-919901
	I0812 10:44:52.864759   28062 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:44:52.864837   28062 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/config.json ...
	I0812 10:44:52.865033   28062 mustload.go:65] Loading cluster: ha-919901
	I0812 10:44:52.865181   28062 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:44:52.865212   28062 stop.go:39] StopHost: ha-919901-m04
	I0812 10:44:52.865594   28062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:52.865631   28062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:52.880828   28062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40995
	I0812 10:44:52.881418   28062 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:52.882017   28062 main.go:141] libmachine: Using API Version  1
	I0812 10:44:52.882042   28062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:52.882385   28062 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:52.885218   28062 out.go:177] * Stopping node "ha-919901-m04"  ...
	I0812 10:44:52.886505   28062 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0812 10:44:52.886538   28062 main.go:141] libmachine: (ha-919901-m04) Calling .DriverName
	I0812 10:44:52.886789   28062 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0812 10:44:52.886810   28062 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHHostname
	I0812 10:44:52.890084   28062 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:44:52.890606   28062 main.go:141] libmachine: (ha-919901-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:f6:73", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:40:36 +0000 UTC Type:0 Mac:52:54:00:4d:f6:73 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:ha-919901-m04 Clientid:01:52:54:00:4d:f6:73}
	I0812 10:44:52.890642   28062 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined IP address 192.168.39.218 and MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:44:52.890842   28062 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHPort
	I0812 10:44:52.891027   28062 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHKeyPath
	I0812 10:44:52.891216   28062 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHUsername
	I0812 10:44:52.891426   28062 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m04/id_rsa Username:docker}
	I0812 10:44:52.971021   28062 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0812 10:44:53.023538   28062 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0812 10:44:53.076036   28062 main.go:141] libmachine: Stopping "ha-919901-m04"...
	I0812 10:44:53.076072   28062 main.go:141] libmachine: (ha-919901-m04) Calling .GetState
	I0812 10:44:53.077837   28062 main.go:141] libmachine: (ha-919901-m04) Calling .Stop
	I0812 10:44:53.081176   28062 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 0/120
	I0812 10:44:54.165309   28062 main.go:141] libmachine: (ha-919901-m04) Calling .GetState
	I0812 10:44:54.166734   28062 main.go:141] libmachine: Machine "ha-919901-m04" was stopped.
	I0812 10:44:54.166749   28062 stop.go:75] duration metric: took 1.280248769s to stop
	I0812 10:44:54.166779   28062 stop.go:39] StopHost: ha-919901-m03
	I0812 10:44:54.167069   28062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:44:54.167111   28062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:44:54.182781   28062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46113
	I0812 10:44:54.183318   28062 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:44:54.183896   28062 main.go:141] libmachine: Using API Version  1
	I0812 10:44:54.183919   28062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:44:54.184220   28062 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:44:54.185942   28062 out.go:177] * Stopping node "ha-919901-m03"  ...
	I0812 10:44:54.187439   28062 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0812 10:44:54.187464   28062 main.go:141] libmachine: (ha-919901-m03) Calling .DriverName
	I0812 10:44:54.187698   28062 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0812 10:44:54.187721   28062 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHHostname
	I0812 10:44:54.190815   28062 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:44:54.191410   28062 main.go:141] libmachine: (ha-919901-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:b2", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:07 +0000 UTC Type:0 Mac:52:54:00:0f:9a:b2 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-919901-m03 Clientid:01:52:54:00:0f:9a:b2}
	I0812 10:44:54.191457   28062 main.go:141] libmachine: (ha-919901-m03) DBG | domain ha-919901-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:0f:9a:b2 in network mk-ha-919901
	I0812 10:44:54.191571   28062 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHPort
	I0812 10:44:54.191747   28062 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHKeyPath
	I0812 10:44:54.191911   28062 main.go:141] libmachine: (ha-919901-m03) Calling .GetSSHUsername
	I0812 10:44:54.192050   28062 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m03/id_rsa Username:docker}
	I0812 10:44:54.281403   28062 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0812 10:44:54.336190   28062 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0812 10:44:54.392316   28062 main.go:141] libmachine: Stopping "ha-919901-m03"...
	I0812 10:44:54.392341   28062 main.go:141] libmachine: (ha-919901-m03) Calling .GetState
	I0812 10:44:54.393872   28062 main.go:141] libmachine: (ha-919901-m03) Calling .Stop
	I0812 10:44:54.397370   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 0/120
	I0812 10:44:55.398822   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 1/120
	I0812 10:44:56.400418   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 2/120
	I0812 10:44:57.401958   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 3/120
	I0812 10:44:58.403542   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 4/120
	I0812 10:44:59.405245   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 5/120
	I0812 10:45:00.406834   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 6/120
	I0812 10:45:01.408362   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 7/120
	I0812 10:45:02.409702   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 8/120
	I0812 10:45:03.411375   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 9/120
	I0812 10:45:04.413859   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 10/120
	I0812 10:45:05.415419   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 11/120
	I0812 10:45:06.416770   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 12/120
	I0812 10:45:07.418320   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 13/120
	I0812 10:45:08.419654   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 14/120
	I0812 10:45:09.421655   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 15/120
	I0812 10:45:10.423340   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 16/120
	I0812 10:45:11.425188   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 17/120
	I0812 10:45:12.426678   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 18/120
	I0812 10:45:13.428253   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 19/120
	I0812 10:45:14.430620   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 20/120
	I0812 10:45:15.432187   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 21/120
	I0812 10:45:16.434259   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 22/120
	I0812 10:45:17.436014   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 23/120
	I0812 10:45:18.438118   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 24/120
	I0812 10:45:19.440018   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 25/120
	I0812 10:45:20.441744   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 26/120
	I0812 10:45:21.443568   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 27/120
	I0812 10:45:22.445108   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 28/120
	I0812 10:45:23.447691   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 29/120
	I0812 10:45:24.449757   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 30/120
	I0812 10:45:25.451263   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 31/120
	I0812 10:45:26.453092   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 32/120
	I0812 10:45:27.454315   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 33/120
	I0812 10:45:28.456057   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 34/120
	I0812 10:45:29.457840   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 35/120
	I0812 10:45:30.459424   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 36/120
	I0812 10:45:31.460929   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 37/120
	I0812 10:45:32.462226   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 38/120
	I0812 10:45:33.463705   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 39/120
	I0812 10:45:34.465909   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 40/120
	I0812 10:45:35.467635   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 41/120
	I0812 10:45:36.469292   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 42/120
	I0812 10:45:37.470675   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 43/120
	I0812 10:45:38.472029   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 44/120
	I0812 10:45:39.474076   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 45/120
	I0812 10:45:40.475477   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 46/120
	I0812 10:45:41.476983   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 47/120
	I0812 10:45:42.478471   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 48/120
	I0812 10:45:43.480027   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 49/120
	I0812 10:45:44.481808   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 50/120
	I0812 10:45:45.483200   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 51/120
	I0812 10:45:46.484614   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 52/120
	I0812 10:45:47.485803   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 53/120
	I0812 10:45:48.486967   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 54/120
	I0812 10:45:49.488266   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 55/120
	I0812 10:45:50.489519   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 56/120
	I0812 10:45:51.490657   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 57/120
	I0812 10:45:52.491858   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 58/120
	I0812 10:45:53.493223   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 59/120
	I0812 10:45:54.495072   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 60/120
	I0812 10:45:55.496738   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 61/120
	I0812 10:45:56.498331   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 62/120
	I0812 10:45:57.499571   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 63/120
	I0812 10:45:58.501201   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 64/120
	I0812 10:45:59.503162   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 65/120
	I0812 10:46:00.504624   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 66/120
	I0812 10:46:01.506981   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 67/120
	I0812 10:46:02.509152   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 68/120
	I0812 10:46:03.510666   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 69/120
	I0812 10:46:04.512453   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 70/120
	I0812 10:46:05.514608   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 71/120
	I0812 10:46:06.516081   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 72/120
	I0812 10:46:07.517938   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 73/120
	I0812 10:46:08.519662   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 74/120
	I0812 10:46:09.521609   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 75/120
	I0812 10:46:10.523518   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 76/120
	I0812 10:46:11.524704   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 77/120
	I0812 10:46:12.526409   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 78/120
	I0812 10:46:13.527828   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 79/120
	I0812 10:46:14.529226   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 80/120
	I0812 10:46:15.531746   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 81/120
	I0812 10:46:16.532987   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 82/120
	I0812 10:46:17.534438   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 83/120
	I0812 10:46:18.535752   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 84/120
	I0812 10:46:19.537121   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 85/120
	I0812 10:46:20.538472   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 86/120
	I0812 10:46:21.539692   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 87/120
	I0812 10:46:22.541281   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 88/120
	I0812 10:46:23.542689   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 89/120
	I0812 10:46:24.545024   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 90/120
	I0812 10:46:25.546386   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 91/120
	I0812 10:46:26.548612   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 92/120
	I0812 10:46:27.550108   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 93/120
	I0812 10:46:28.551607   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 94/120
	I0812 10:46:29.553705   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 95/120
	I0812 10:46:30.555163   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 96/120
	I0812 10:46:31.556521   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 97/120
	I0812 10:46:32.557995   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 98/120
	I0812 10:46:33.559720   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 99/120
	I0812 10:46:34.561871   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 100/120
	I0812 10:46:35.563422   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 101/120
	I0812 10:46:36.564788   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 102/120
	I0812 10:46:37.566236   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 103/120
	I0812 10:46:38.567821   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 104/120
	I0812 10:46:39.570262   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 105/120
	I0812 10:46:40.571858   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 106/120
	I0812 10:46:41.573316   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 107/120
	I0812 10:46:42.575107   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 108/120
	I0812 10:46:43.576538   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 109/120
	I0812 10:46:44.577973   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 110/120
	I0812 10:46:45.579447   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 111/120
	I0812 10:46:46.580898   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 112/120
	I0812 10:46:47.582282   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 113/120
	I0812 10:46:48.583803   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 114/120
	I0812 10:46:49.585302   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 115/120
	I0812 10:46:50.587640   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 116/120
	I0812 10:46:51.589480   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 117/120
	I0812 10:46:52.590905   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 118/120
	I0812 10:46:53.592559   28062 main.go:141] libmachine: (ha-919901-m03) Waiting for machine to stop 119/120
	I0812 10:46:54.593292   28062 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0812 10:46:54.593371   28062 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0812 10:46:54.595723   28062 out.go:177] 
	W0812 10:46:54.597156   28062 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0812 10:46:54.597180   28062 out.go:239] * 
	* 
	W0812 10:46:54.599396   28062 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 10:46:54.601010   28062 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-919901 -v=7 --alsologtostderr" : exit status 82
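The exit status 82 recorded here comes from the `out/minikube-linux-amd64 stop -p ha-919901 -v=7 --alsologtostderr` invocation above: "ha-919901-m04" stopped after a single poll, but "ha-919901-m03" was still "Running" after all 120 roughly one-second polls, so the command gave up with GUEST_STOP_TIMEOUT. One way to inspect the stuck machine directly on the CI host, sketched under the assumption that virsh is available there and that the kvm2 driver names the libvirt domain after the node (as the DBG lines earlier in this log suggest):

    # list libvirt domains and their current states
    virsh list --all
    # ask the stuck control-plane node to shut down cleanly, then force it off if it still refuses
    virsh shutdown ha-919901-m03
    virsh destroy ha-919901-m03

A forced power-off discards whatever prevented the clean stop, so inspecting the domain log under /var/log/libvirt/qemu/ first may be more useful when debugging why the guest ignored the stop request.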
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-919901 --wait=true -v=7 --alsologtostderr
E0812 10:48:30.975648   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
E0812 10:49:54.021081   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
E0812 10:50:45.937847   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-919901 --wait=true -v=7 --alsologtostderr: (4m10.309100037s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-919901
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-919901 -n ha-919901
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-919901 logs -n 25: (1.949559788s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-919901 cp ha-919901-m03:/home/docker/cp-test.txt                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m02:/home/docker/cp-test_ha-919901-m03_ha-919901-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n                                                                 | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n ha-919901-m02 sudo cat                                          | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | /home/docker/cp-test_ha-919901-m03_ha-919901-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-919901 cp ha-919901-m03:/home/docker/cp-test.txt                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m04:/home/docker/cp-test_ha-919901-m03_ha-919901-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n                                                                 | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n ha-919901-m04 sudo cat                                          | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | /home/docker/cp-test_ha-919901-m03_ha-919901-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-919901 cp testdata/cp-test.txt                                                | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n                                                                 | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-919901 cp ha-919901-m04:/home/docker/cp-test.txt                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2587644134/001/cp-test_ha-919901-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n                                                                 | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-919901 cp ha-919901-m04:/home/docker/cp-test.txt                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901:/home/docker/cp-test_ha-919901-m04_ha-919901.txt                       |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n                                                                 | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n ha-919901 sudo cat                                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | /home/docker/cp-test_ha-919901-m04_ha-919901.txt                                 |           |         |         |                     |                     |
	| cp      | ha-919901 cp ha-919901-m04:/home/docker/cp-test.txt                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m02:/home/docker/cp-test_ha-919901-m04_ha-919901-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n                                                                 | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n ha-919901-m02 sudo cat                                          | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | /home/docker/cp-test_ha-919901-m04_ha-919901-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-919901 cp ha-919901-m04:/home/docker/cp-test.txt                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m03:/home/docker/cp-test_ha-919901-m04_ha-919901-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n                                                                 | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n ha-919901-m03 sudo cat                                          | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | /home/docker/cp-test_ha-919901-m04_ha-919901-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-919901 node stop m02 -v=7                                                     | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-919901 node start m02 -v=7                                                    | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:43 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-919901 -v=7                                                           | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:44 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-919901 -v=7                                                                | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:44 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-919901 --wait=true -v=7                                                    | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:46 UTC | 12 Aug 24 10:51 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-919901                                                                | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:51 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 10:46:54
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 10:46:54.647303   28520 out.go:291] Setting OutFile to fd 1 ...
	I0812 10:46:54.647590   28520 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:46:54.647609   28520 out.go:304] Setting ErrFile to fd 2...
	I0812 10:46:54.647616   28520 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:46:54.647853   28520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 10:46:54.648452   28520 out.go:298] Setting JSON to false
	I0812 10:46:54.649433   28520 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1756,"bootTime":1723457859,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 10:46:54.649493   28520 start.go:139] virtualization: kvm guest
	I0812 10:46:54.651834   28520 out.go:177] * [ha-919901] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 10:46:54.653195   28520 notify.go:220] Checking for updates...
	I0812 10:46:54.653229   28520 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 10:46:54.654788   28520 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 10:46:54.656530   28520 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 10:46:54.658000   28520 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 10:46:54.659568   28520 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 10:46:54.661268   28520 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 10:46:54.663351   28520 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:46:54.663458   28520 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 10:46:54.663921   28520 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:46:54.663973   28520 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:46:54.679249   28520 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41375
	I0812 10:46:54.679709   28520 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:46:54.680222   28520 main.go:141] libmachine: Using API Version  1
	I0812 10:46:54.680250   28520 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:46:54.680639   28520 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:46:54.680927   28520 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:46:54.719607   28520 out.go:177] * Using the kvm2 driver based on existing profile
	I0812 10:46:54.721188   28520 start.go:297] selected driver: kvm2
	I0812 10:46:54.721211   28520 start.go:901] validating driver "kvm2" against &{Name:ha-919901 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-919901 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.218 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 10:46:54.721398   28520 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 10:46:54.721757   28520 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 10:46:54.721855   28520 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19409-3774/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 10:46:54.737988   28520 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 10:46:54.738726   28520 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 10:46:54.738801   28520 cni.go:84] Creating CNI manager for ""
	I0812 10:46:54.738818   28520 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0812 10:46:54.738886   28520 start.go:340] cluster config:
	{Name:ha-919901 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-919901 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.218 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 10:46:54.739030   28520 iso.go:125] acquiring lock: {Name:mk817f4bb6a5031e68978029203802957205757f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 10:46:54.740898   28520 out.go:177] * Starting "ha-919901" primary control-plane node in "ha-919901" cluster
	I0812 10:46:54.742273   28520 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 10:46:54.742354   28520 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0812 10:46:54.742370   28520 cache.go:56] Caching tarball of preloaded images
	I0812 10:46:54.742474   28520 preload.go:172] Found /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 10:46:54.742488   28520 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 10:46:54.742658   28520 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/config.json ...
	I0812 10:46:54.742949   28520 start.go:360] acquireMachinesLock for ha-919901: {Name:mkf1f3ff7da562a087a1e344d13e67c3c8140973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 10:46:54.743026   28520 start.go:364] duration metric: took 55.667µs to acquireMachinesLock for "ha-919901"
	I0812 10:46:54.743048   28520 start.go:96] Skipping create...Using existing machine configuration
	I0812 10:46:54.743056   28520 fix.go:54] fixHost starting: 
	I0812 10:46:54.743384   28520 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:46:54.743434   28520 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:46:54.758432   28520 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33143
	I0812 10:46:54.758851   28520 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:46:54.759512   28520 main.go:141] libmachine: Using API Version  1
	I0812 10:46:54.759532   28520 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:46:54.759901   28520 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:46:54.760119   28520 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:46:54.760266   28520 main.go:141] libmachine: (ha-919901) Calling .GetState
	I0812 10:46:54.761855   28520 fix.go:112] recreateIfNeeded on ha-919901: state=Running err=<nil>
	W0812 10:46:54.761871   28520 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 10:46:54.763895   28520 out.go:177] * Updating the running kvm2 "ha-919901" VM ...
	I0812 10:46:54.765301   28520 machine.go:94] provisionDockerMachine start ...
	I0812 10:46:54.765330   28520 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:46:54.765577   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:46:54.768339   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:46:54.768792   28520 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:46:54.768818   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:46:54.768997   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:46:54.769194   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:46:54.769362   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:46:54.769532   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:46:54.769718   28520 main.go:141] libmachine: Using SSH client type: native
	I0812 10:46:54.769908   28520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0812 10:46:54.769919   28520 main.go:141] libmachine: About to run SSH command:
	hostname
	I0812 10:46:54.881894   28520 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-919901
	
	I0812 10:46:54.881918   28520 main.go:141] libmachine: (ha-919901) Calling .GetMachineName
	I0812 10:46:54.882149   28520 buildroot.go:166] provisioning hostname "ha-919901"
	I0812 10:46:54.882170   28520 main.go:141] libmachine: (ha-919901) Calling .GetMachineName
	I0812 10:46:54.882393   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:46:54.885299   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:46:54.885829   28520 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:46:54.885856   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:46:54.886051   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:46:54.886287   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:46:54.886467   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:46:54.886596   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:46:54.886758   28520 main.go:141] libmachine: Using SSH client type: native
	I0812 10:46:54.886926   28520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0812 10:46:54.886938   28520 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-919901 && echo "ha-919901" | sudo tee /etc/hostname
	I0812 10:46:55.015696   28520 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-919901
	
	I0812 10:46:55.015721   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:46:55.018782   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:46:55.019262   28520 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:46:55.019283   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:46:55.019514   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:46:55.019731   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:46:55.019904   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:46:55.020033   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:46:55.020201   28520 main.go:141] libmachine: Using SSH client type: native
	I0812 10:46:55.020372   28520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0812 10:46:55.020387   28520 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-919901' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-919901/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-919901' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 10:46:55.138466   28520 main.go:141] libmachine: SSH cmd err, output: <nil>: 
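For reference, a minimal sketch (not part of the test run) of spot-checking the hostname and /etc/hosts entry that the provisioning commands above are meant to leave behind, assuming the ha-919901 profile from this log:

	# hypothetical manual check, run from the CI host against the same profile
	minikube -p ha-919901 ssh -- hostname
	minikube -p ha-919901 ssh -- grep ha-919901 /etc/hosts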
	I0812 10:46:55.138503   28520 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19409-3774/.minikube CaCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19409-3774/.minikube}
	I0812 10:46:55.138520   28520 buildroot.go:174] setting up certificates
	I0812 10:46:55.138528   28520 provision.go:84] configureAuth start
	I0812 10:46:55.138536   28520 main.go:141] libmachine: (ha-919901) Calling .GetMachineName
	I0812 10:46:55.138808   28520 main.go:141] libmachine: (ha-919901) Calling .GetIP
	I0812 10:46:55.141593   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:46:55.141932   28520 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:46:55.141952   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:46:55.142072   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:46:55.144412   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:46:55.144837   28520 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:46:55.144860   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:46:55.145016   28520 provision.go:143] copyHostCerts
	I0812 10:46:55.145057   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem
	I0812 10:46:55.145093   28520 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem, removing ...
	I0812 10:46:55.145102   28520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem
	I0812 10:46:55.145173   28520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem (1082 bytes)
	I0812 10:46:55.145264   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem
	I0812 10:46:55.145281   28520 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem, removing ...
	I0812 10:46:55.145285   28520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem
	I0812 10:46:55.145309   28520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem (1123 bytes)
	I0812 10:46:55.145363   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem
	I0812 10:46:55.145379   28520 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem, removing ...
	I0812 10:46:55.145385   28520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem
	I0812 10:46:55.145408   28520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem (1679 bytes)
	I0812 10:46:55.145475   28520 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem org=jenkins.ha-919901 san=[127.0.0.1 192.168.39.5 ha-919901 localhost minikube]
	I0812 10:46:55.466340   28520 provision.go:177] copyRemoteCerts
	I0812 10:46:55.466412   28520 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 10:46:55.466439   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:46:55.469148   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:46:55.469526   28520 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:46:55.469569   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:46:55.469692   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:46:55.469935   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:46:55.470166   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:46:55.470387   28520 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:46:55.556143   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0812 10:46:55.556225   28520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0812 10:46:55.585239   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0812 10:46:55.585304   28520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0812 10:46:55.614632   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0812 10:46:55.614716   28520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0812 10:46:55.641009   28520 provision.go:87] duration metric: took 502.4708ms to configureAuth
	I0812 10:46:55.641036   28520 buildroot.go:189] setting minikube options for container-runtime
	I0812 10:46:55.641269   28520 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:46:55.641356   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:46:55.643952   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:46:55.644448   28520 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:46:55.644486   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:46:55.644666   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:46:55.644883   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:46:55.645040   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:46:55.645186   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:46:55.645331   28520 main.go:141] libmachine: Using SSH client type: native
	I0812 10:46:55.645518   28520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0812 10:46:55.645539   28520 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 10:48:26.551171   28520 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 10:48:26.551195   28520 machine.go:97] duration metric: took 1m31.785877087s to provisionDockerMachine
	I0812 10:48:26.551206   28520 start.go:293] postStartSetup for "ha-919901" (driver="kvm2")
	I0812 10:48:26.551223   28520 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 10:48:26.551236   28520 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:48:26.551612   28520 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 10:48:26.551648   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:48:26.554801   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:48:26.555244   28520 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:48:26.555270   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:48:26.555542   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:48:26.555785   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:48:26.555978   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:48:26.556117   28520 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:48:26.645400   28520 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 10:48:26.649890   28520 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 10:48:26.649913   28520 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/addons for local assets ...
	I0812 10:48:26.649972   28520 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/files for local assets ...
	I0812 10:48:26.650066   28520 filesync.go:149] local asset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> 109272.pem in /etc/ssl/certs
	I0812 10:48:26.650077   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> /etc/ssl/certs/109272.pem
	I0812 10:48:26.650155   28520 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 10:48:26.659882   28520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /etc/ssl/certs/109272.pem (1708 bytes)
	I0812 10:48:26.687091   28520 start.go:296] duration metric: took 135.864438ms for postStartSetup
	I0812 10:48:26.687149   28520 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:48:26.687437   28520 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0812 10:48:26.687460   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:48:26.690115   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:48:26.690471   28520 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:48:26.690498   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:48:26.690653   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:48:26.690948   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:48:26.691179   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:48:26.691415   28520 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	W0812 10:48:26.775490   28520 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0812 10:48:26.775519   28520 fix.go:56] duration metric: took 1m32.03246339s for fixHost
	I0812 10:48:26.775541   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:48:26.778127   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:48:26.778484   28520 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:48:26.778507   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:48:26.778677   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:48:26.778897   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:48:26.779056   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:48:26.779175   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:48:26.779321   28520 main.go:141] libmachine: Using SSH client type: native
	I0812 10:48:26.779484   28520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0812 10:48:26.779494   28520 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0812 10:48:26.889744   28520 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723459706.854157384
	
	I0812 10:48:26.889769   28520 fix.go:216] guest clock: 1723459706.854157384
	I0812 10:48:26.889776   28520 fix.go:229] Guest: 2024-08-12 10:48:26.854157384 +0000 UTC Remote: 2024-08-12 10:48:26.775526324 +0000 UTC m=+92.165330545 (delta=78.63106ms)
	I0812 10:48:26.889794   28520 fix.go:200] guest clock delta is within tolerance: 78.63106ms
	I0812 10:48:26.889799   28520 start.go:83] releasing machines lock for "ha-919901", held for 1m32.146762409s
	I0812 10:48:26.889817   28520 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:48:26.890098   28520 main.go:141] libmachine: (ha-919901) Calling .GetIP
	I0812 10:48:26.892737   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:48:26.893183   28520 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:48:26.893216   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:48:26.893455   28520 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:48:26.893974   28520 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:48:26.894206   28520 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:48:26.894295   28520 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 10:48:26.894343   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:48:26.894445   28520 ssh_runner.go:195] Run: cat /version.json
	I0812 10:48:26.894463   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:48:26.897068   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:48:26.897474   28520 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:48:26.897502   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:48:26.897521   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:48:26.897644   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:48:26.897802   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:48:26.897965   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:48:26.897988   28520 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:48:26.898012   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:48:26.898146   28520 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:48:26.898168   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:48:26.898313   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:48:26.898467   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:48:26.898610   28520 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:48:27.012954   28520 ssh_runner.go:195] Run: systemctl --version
	I0812 10:48:27.019846   28520 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 10:48:27.181931   28520 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 10:48:27.188435   28520 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 10:48:27.188510   28520 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 10:48:27.197607   28520 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0812 10:48:27.197630   28520 start.go:495] detecting cgroup driver to use...
	I0812 10:48:27.197689   28520 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 10:48:27.214884   28520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 10:48:27.229268   28520 docker.go:217] disabling cri-docker service (if available) ...
	I0812 10:48:27.229374   28520 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 10:48:27.243258   28520 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 10:48:27.256804   28520 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 10:48:27.405651   28520 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 10:48:27.552354   28520 docker.go:233] disabling docker service ...
	I0812 10:48:27.552437   28520 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 10:48:27.569174   28520 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 10:48:27.583125   28520 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 10:48:27.727277   28520 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 10:48:27.874360   28520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 10:48:27.888390   28520 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 10:48:27.909232   28520 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0812 10:48:27.909284   28520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:48:27.919808   28520 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 10:48:27.919881   28520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:48:27.930266   28520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:48:27.940829   28520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:48:27.951425   28520 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 10:48:27.962304   28520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:48:27.973178   28520 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:48:27.984696   28520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:48:27.995115   28520 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 10:48:28.004730   28520 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 10:48:28.014083   28520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 10:48:28.159247   28520 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0812 10:48:35.567027   28520 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.40774347s)
	I0812 10:48:35.567055   28520 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 10:48:35.567123   28520 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 10:48:35.571931   28520 start.go:563] Will wait 60s for crictl version
	I0812 10:48:35.571999   28520 ssh_runner.go:195] Run: which crictl
	I0812 10:48:35.576285   28520 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 10:48:35.616512   28520 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0812 10:48:35.616589   28520 ssh_runner.go:195] Run: crio --version
	I0812 10:48:35.646316   28520 ssh_runner.go:195] Run: crio --version
	I0812 10:48:35.676080   28520 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0812 10:48:35.677507   28520 main.go:141] libmachine: (ha-919901) Calling .GetIP
	I0812 10:48:35.680220   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:48:35.680690   28520 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:48:35.680718   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:48:35.681012   28520 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0812 10:48:35.685887   28520 kubeadm.go:883] updating cluster {Name:ha-919901 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-919901 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.218 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0812 10:48:35.686032   28520 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 10:48:35.686076   28520 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 10:48:35.729838   28520 crio.go:514] all images are preloaded for cri-o runtime.
	I0812 10:48:35.729862   28520 crio.go:433] Images already preloaded, skipping extraction
	I0812 10:48:35.729906   28520 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 10:48:35.766383   28520 crio.go:514] all images are preloaded for cri-o runtime.
	I0812 10:48:35.766406   28520 cache_images.go:84] Images are preloaded, skipping loading
	I0812 10:48:35.766414   28520 kubeadm.go:934] updating node { 192.168.39.5 8443 v1.30.3 crio true true} ...
	I0812 10:48:35.766504   28520 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-919901 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-919901 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0812 10:48:35.766569   28520 ssh_runner.go:195] Run: crio config
	I0812 10:48:35.816179   28520 cni.go:84] Creating CNI manager for ""
	I0812 10:48:35.816200   28520 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0812 10:48:35.816211   28520 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0812 10:48:35.816245   28520 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.5 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-919901 NodeName:ha-919901 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0812 10:48:35.816413   28520 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-919901"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
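The generated kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new on the node (see the scp step further below). A minimal sketch, assuming the ha-919901 profile from this log, of inspecting that rendered file by hand:

	# hypothetical manual inspection of the rendered kubeadm config on the guest
	minikube -p ha-919901 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new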
	
	I0812 10:48:35.816436   28520 kube-vip.go:115] generating kube-vip config ...
	I0812 10:48:35.816485   28520 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0812 10:48:35.827685   28520 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0812 10:48:35.827806   28520 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
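A minimal sketch, using the profile name, manifest path and control-plane VIP (192.168.39.254:8443) shown in this log, of confirming that the kube-vip static pod manifest landed where kubelet watches and that the VIP answers:

	# hypothetical checks; paths and addresses are taken from the log above
	minikube -p ha-919901 ssh -- sudo ls -l /etc/kubernetes/manifests/kube-vip.yaml
	curl -k https://192.168.39.254:8443/version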
	I0812 10:48:35.827874   28520 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0812 10:48:35.837344   28520 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 10:48:35.837424   28520 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0812 10:48:35.846700   28520 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0812 10:48:35.863467   28520 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 10:48:35.880185   28520 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0812 10:48:35.896905   28520 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0812 10:48:35.913728   28520 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0812 10:48:35.918556   28520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 10:48:36.063675   28520 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 10:48:36.078652   28520 certs.go:68] Setting up /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901 for IP: 192.168.39.5
	I0812 10:48:36.078679   28520 certs.go:194] generating shared ca certs ...
	I0812 10:48:36.078698   28520 certs.go:226] acquiring lock for ca certs: {Name:mkab0b83a49d5695bc804bf960b468b7a47f73f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:48:36.078871   28520 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key
	I0812 10:48:36.078927   28520 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key
	I0812 10:48:36.078939   28520 certs.go:256] generating profile certs ...
	I0812 10:48:36.079048   28520 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/client.key
	I0812 10:48:36.079083   28520 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key.73ff17da
	I0812 10:48:36.079116   28520 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt.73ff17da with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.5 192.168.39.139 192.168.39.195 192.168.39.254]
	I0812 10:48:36.322084   28520 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt.73ff17da ...
	I0812 10:48:36.322116   28520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt.73ff17da: {Name:mk95510ba6d23b1a8b9a440efe74085f486357b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:48:36.322281   28520 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key.73ff17da ...
	I0812 10:48:36.322292   28520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key.73ff17da: {Name:mk5a5edb5733fe7a10dc1627b88ff9518edb7b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:48:36.322365   28520 certs.go:381] copying /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt.73ff17da -> /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt
	I0812 10:48:36.322526   28520 certs.go:385] copying /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key.73ff17da -> /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key
	I0812 10:48:36.322646   28520 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.key
	I0812 10:48:36.322663   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0812 10:48:36.322675   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0812 10:48:36.322717   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0812 10:48:36.322737   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0812 10:48:36.322749   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0812 10:48:36.322762   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0812 10:48:36.322774   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0812 10:48:36.322786   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0812 10:48:36.322829   28520 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem (1338 bytes)
	W0812 10:48:36.322855   28520 certs.go:480] ignoring /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927_empty.pem, impossibly tiny 0 bytes
	I0812 10:48:36.322865   28520 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem (1679 bytes)
	I0812 10:48:36.322887   28520 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem (1082 bytes)
	I0812 10:48:36.322907   28520 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem (1123 bytes)
	I0812 10:48:36.322928   28520 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem (1679 bytes)
	I0812 10:48:36.322963   28520 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem (1708 bytes)
	I0812 10:48:36.322989   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:48:36.323003   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem -> /usr/share/ca-certificates/10927.pem
	I0812 10:48:36.323015   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> /usr/share/ca-certificates/109272.pem
	I0812 10:48:36.323581   28520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 10:48:36.349235   28520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 10:48:36.372664   28520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 10:48:36.396478   28520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 10:48:36.420885   28520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0812 10:48:36.446496   28520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0812 10:48:36.470700   28520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 10:48:36.494793   28520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0812 10:48:36.519235   28520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 10:48:36.543049   28520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem --> /usr/share/ca-certificates/10927.pem (1338 bytes)
	I0812 10:48:36.567499   28520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /usr/share/ca-certificates/109272.pem (1708 bytes)
	I0812 10:48:36.591458   28520 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 10:48:36.608293   28520 ssh_runner.go:195] Run: openssl version
	I0812 10:48:36.614417   28520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 10:48:36.625750   28520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:48:36.630462   28520 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:48:36.630526   28520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:48:36.636215   28520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 10:48:36.646197   28520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10927.pem && ln -fs /usr/share/ca-certificates/10927.pem /etc/ssl/certs/10927.pem"
	I0812 10:48:36.657324   28520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10927.pem
	I0812 10:48:36.662003   28520 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 10:33 /usr/share/ca-certificates/10927.pem
	I0812 10:48:36.662072   28520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10927.pem
	I0812 10:48:36.667650   28520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10927.pem /etc/ssl/certs/51391683.0"
	I0812 10:48:36.677606   28520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109272.pem && ln -fs /usr/share/ca-certificates/109272.pem /etc/ssl/certs/109272.pem"
	I0812 10:48:36.689338   28520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109272.pem
	I0812 10:48:36.693804   28520 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 10:33 /usr/share/ca-certificates/109272.pem
	I0812 10:48:36.693878   28520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109272.pem
	I0812 10:48:36.699797   28520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109272.pem /etc/ssl/certs/3ec20f2e.0"
	I0812 10:48:36.711165   28520 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 10:48:36.715948   28520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0812 10:48:36.722003   28520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0812 10:48:36.727835   28520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0812 10:48:36.733758   28520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0812 10:48:36.739899   28520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0812 10:48:36.745475   28520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
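The -checkend 86400 calls above only yield an exit code. A minimal sketch, assuming the certificate paths from this log and running on the guest, of printing the actual expiry date alongside the same 24-hour check:

	# hypothetical variant of the check above that also prints notAfter
	sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
	sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	  && echo "still valid for at least 24h"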
	I0812 10:48:36.751494   28520 kubeadm.go:392] StartCluster: {Name:ha-919901 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-919901 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.218 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 10:48:36.751643   28520 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0812 10:48:36.751698   28520 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 10:48:36.790666   28520 cri.go:89] found id: "a8766116f2e58d7532c947c56192d66b7cdc96b2954f05c3a7e3999a645c5edc"
	I0812 10:48:36.790693   28520 cri.go:89] found id: "10b588fc239e3d3313ca309e1f13be69d19663d8914ac6cbccaa255b1f5a1192"
	I0812 10:48:36.790699   28520 cri.go:89] found id: "7a668d0f8e974a7ccd5a60e3be4f4d50b878d943bc7a9e3da000080ca527cd67"
	I0812 10:48:36.790704   28520 cri.go:89] found id: "7fed01d7160560309c4ee6b8b6f4ee49e2169be938b7bd960d22a6e413d73e4f"
	I0812 10:48:36.790708   28520 cri.go:89] found id: "6d0c6b246369b77b455404b70fd88b94e4da26cbd201a7ae30b74cb7039d25b8"
	I0812 10:48:36.790713   28520 cri.go:89] found id: "ec7364f484b0d7fa35ace0124b8dc922a617b051b4ef8dc8d5513b3c9f49bf2b"
	I0812 10:48:36.790717   28520 cri.go:89] found id: "4d3c2394cc8cd76cb420cb4132296fa326bf586e6079af20c991bbd39a02e6bf"
	I0812 10:48:36.790722   28520 cri.go:89] found id: "7cd3e13fb2b3b4d48e2306ffd36c6241a47faafe75f1b528e3a316c9bec0705f"
	I0812 10:48:36.790726   28520 cri.go:89] found id: "52237e0a859ca116f637782e69b8c477b172bcffe7dd962dcf7401651171c5ed"
	I0812 10:48:36.790733   28520 cri.go:89] found id: "2af78571207ce34c193f10ac91fe17888677e013439ec5e2bf1781b8f46309bf"
	I0812 10:48:36.790742   28520 cri.go:89] found id: "0c30877cfdccaa151966735f7bbaba34a1f500f301ce489538d01d863c68ee14"
	I0812 10:48:36.790747   28520 cri.go:89] found id: "2b624c8fe2100a8281fab931d59941e13a68b3367ee7a36ece28d6087e8d1a6f"
	I0812 10:48:36.790751   28520 cri.go:89] found id: "e76a506154546c22ce7972ea95053e0254f2cc2e30d7e1e31a666f212969115e"
	I0812 10:48:36.790755   28520 cri.go:89] found id: ""
	I0812 10:48:36.790810   28520 ssh_runner.go:195] Run: sudo runc list -f json
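
The `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` invocation above is what produces the "found id:" list: `--quiet` prints one container ID per line and the label filter restricts the listing to kube-system pods. A hedged sketch of the same collection step (Go, shelling out with os/exec; assumes crictl and sudo are available on the node, and the function name is illustrative rather than minikube's own):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listNamespaceContainers runs crictl with a pod-namespace label filter and
    // returns the container IDs it prints, one per line (as with --quiet).
    func listNamespaceContainers(namespace string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace="+namespace).Output()
    	if err != nil {
    		return nil, fmt.Errorf("crictl ps: %w", err)
    	}
    	var ids []string
    	for _, line := range strings.Split(string(out), "\n") {
    		if line = strings.TrimSpace(line); line != "" {
    			ids = append(ids, line)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	ids, err := listNamespaceContainers("kube-system")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	for _, id := range ids {
    		fmt.Println("found id:", id)
    	}
    }
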
	
	
	==> CRI-O <==
	Aug 12 10:51:05 ha-919901 crio[3808]: time="2024-08-12 10:51:05.776832448Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723459865776810545,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8810e47-8457-4875-b9cd-7ff4658f96fc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:51:05 ha-919901 crio[3808]: time="2024-08-12 10:51:05.777521508Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3aa99ab6-160b-4c81-9551-0445902271ad name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:51:05 ha-919901 crio[3808]: time="2024-08-12 10:51:05.777581750Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3aa99ab6-160b-4c81-9551-0445902271ad name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:51:05 ha-919901 crio[3808]: time="2024-08-12 10:51:05.777970962Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1feef8d0a7509a3143f3435dbab4d706c2a3b37b5a098b71fe9c4ed101579303,PodSandboxId:b75bef0d429e552365caad1acf173c2947f316941914919974b6b9e62ff8b1d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723459766461275399,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d697e68-33fa-4784-90d8-0561d3fff6a8,},Annotations:map[string]string{io.kubernetes.container.hash: 4520a48c,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e1fc5e3909238498769b1e7c49de8d11bd947aa9683e202c5cf20d8b125b790,PodSandboxId:b77024a4392f2dddcb6ccb1abc30040538c3db974a8f3914c00e2c9eb23dd930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723459765444925550,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e967e3926409b9b4490fa429d62fdc,},Annotations:map[string]string{io.kubernetes.container.hash: a14afb07,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c65bbec166c0e1e29b1dc74149f68f9ae8fc6eb749087afa70771e501ea1ea,PodSandboxId:b50fc8ef65be2259d321aba349f98c459ce88e2978e02c7d7453416d5fce1a0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723459762452923199,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2498c72d72e1e71b3b9015542989ea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02a975906041de1c0d96f3482a8837100f6c729585f87ca832b98cf7a9f71edc,PodSandboxId:af3930beb96f25570de66cfa8952d80d38d9f0a0a2a80f6dc13c475062fab782,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723459755302124395,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pj8gg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9a02941-b2f3-4ffe-bdca-07a7322887b1,},Annotations:map[string]string{io.kubernetes.container.hash: cbfab74f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2fc5ccdb449f41c11d07f7a4e5f0213f29756ab76385938d7d4be97b5cb121,PodSandboxId:f299608085a7359bb3ee02d4f12dbdf326b63649c5108f0c5a39af1e83398c66,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723459733285047880,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b0f1622d3c68c0a51defdcc0ae67a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6976ec7a56e859fcd49a53e0a3c9b9e23fa6ab1283c344676b64a27bc30f3ff,PodSandboxId:27c3c8acb92734404a0cd004ccd0c8b0c860547b5d72a17e4152fbee9b56e59c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723459722141629124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftvfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed243a1-62f6-4ad1-8873-0fbe1756be9e,},Annotations:map[string]string{io.kubernetes.container.hash: 3e160f3e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:bc6462a604f646f4d41247df33068d997dc236d79cc2786c0530f72f7574d1ee,PodSandboxId:6da31f89d702cc43c1ee7ce2d665857288109222a66679b4cbcef3fbafef0ad7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723459722102442243,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k5wz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e585a5-9ab7-4211-8ed0-dc1d21345883,},Annotations:map[string]string{io.kubernetes.container.hash: 44f25b1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:1766e0cc1e04cbf0b71e2ea90c9155d15810d451c0d3d7eba275dd2bc5f17ae2,PodSandboxId:b75bef0d429e552365caad1acf173c2947f316941914919974b6b9e62ff8b1d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723459721772325198,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d697e68-33fa-4784-90d8-0561d3fff6a8,},Annotations:map[string]string{io.kubernetes.container.hash: 4520a48c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:819e54d9
554ed5489fc402272cb1ef7f5adb6f5d6e5b210f0649673078590180,PodSandboxId:66d278adbf4b55ffb36576211a5c3ba25b269a1e237662e92d9788f67d2365ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723459721901879539,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148b0299f3f1839b42d2ab8e65cc0f2a,},Annotations:map[string]string{io.kubernetes.container.hash: 1194859b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee56da3827469ffbae6d2e0fafc2a824aa82ce08fc5374d33336e5201fe
a5df5,PodSandboxId:c588fd38b169b04dc89c2057742aef16a4b575345f9dfef462d8bebae9746711,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723459721867253840,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wstd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53bfc998-8d70-4dc5-b0f9-a78610183a2b,},Annotations:map[string]string{io.kubernetes.container.hash: de677b7f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f2b335c58f4e58dc1016841c6013788ca91f08610028f1b3191acb56e98aa93,PodSandboxId:b50fc8ef65be2259d321aba349f98c459ce88e2978e02c7d7453416d5fce1a0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723459721859873479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2498c72d72e1e71b3b9015542989ea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc2643d16d41c975a1af1ee9129789ff983df1bb4c8e03c11fbda01cd3f898d4,PodSandboxId:34445bb6eb65cf7c05d06cb43e6f84c241c458dbeefabbd6a15e9e33ca49e151,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723459721715881844,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e4b07d53879d3ec685dd71228335fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40a98a9a1e93623da9b24a95b7598aefeec227db5303dd2ec1bfad11b70d58bc,PodSandboxId:b77024a4392f2dddcb6ccb1abc30040538c3db974a8f3914c00e2c9eb23dd930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723459721619677152,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e967e3926409b9b4490fa429d62fdc,},Annotations:map[string]string{io.kubernetes.container.hash: a14afb07,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87dc2b222be5fad0df75edbfc5ffee9c05f568c78aaefea95c0fcf09ce77244e,PodSandboxId:9201197c1ac54eaf6a8c84ccaa8d2d8589790723cd2a7be14900c7a9bfd334ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723459717045196445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rc7cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92f21234-d4e8-4f0e-a8e5-356db2297843,},Annotations:map[string]string{io.kubernetes.container.hash: f28abbb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8542d2fe34f2b44c191e084e1f85f0eb7b1a0b1a39d63b3fd3ba0510027c5668,PodSandboxId:40dfaa461230a2e373c966c081e58e06b69e54e472764d0338e67823d0a759f7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723459217676022810,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pj8gg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9a02941-b2f3-4ffe-bdca-07a7322887b1,},Annot
ations:map[string]string{io.kubernetes.container.hash: cbfab74f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0c6b246369b77b455404b70fd88b94e4da26cbd201a7ae30b74cb7039d25b8,PodSandboxId:7ee3eb4b0b10eb268d84ccf4b86511e118adba8ff72671265ad6936653c79876,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723459065194039855,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wstd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53bfc998-8d70-4dc5-b0f9-a78610183a2b,},Annotations:map[string]string{io.kube
rnetes.container.hash: de677b7f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec7364f484b0d7fa35ace0124b8dc922a617b051b4ef8dc8d5513b3c9f49bf2b,PodSandboxId:a88f690225d3f6e35d63e64a591930de23dbdadec994a1d50c2a1302aaf9567d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723459065148082153,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rc7cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92f21234-d4e8-4f0e-a8e5-356db2297843,},Annotations:map[string]string{io.kubernetes.container.hash: f28abbb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d3c2394cc8cd76cb420cb4132296fa326bf586e6079af20c991bbd39a02e6bf,PodSandboxId:2abd5fefba6f34c44814b969b1a9bc9f1aa44fea6ad95145b733cb0c7e82bff9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723459052942878767,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k5wz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e585a5-9ab7-4211-8ed0-dc1d21345883,},Annotations:map[string]string{io.kubernetes.container.hash: 44f25b1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd3e13fb2b3b4d48e2306ffd36c6241a47faafe75f1b528e3a316c9bec0705f,PodSandboxId:b7d28551c45a6f766983f4c1618f75eefcc2aa63991e48279d50e948648a119a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723459048117998507,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftvfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed243a1-62f6-4ad1-8873-0fbe1756be9e,},Annotations:map[string]string{io.kubernetes.container.hash: 3e160f3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af78571207ce34c193f10ac91fe17888677e013439ec5e2bf1781b8f46309bf,PodSandboxId:06243d97384e5ebbf9480bd664848a21f3329d17103ac89c25d646cd5c6af2ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723459028074909889,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e4b07d53879d3ec685dd71228335fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c30877cfdccaa151966735f7bbaba34a1f500f301ce489538d01d863c68ee14,PodSandboxId:fae04d253fe0c5b3c2344ea51b9e34bb9ffbb9a35549113dbbdf82302161c5db,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723459028024477228,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148b0299f3f1839b42d2ab8e65cc0f2a,},Annotations:map[string]string{io.kubernetes.container.hash: 1194859b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3aa99ab6-160b-4c81-9551-0445902271ad name=/runtime.v1.RuntimeService/ListContainers
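
The paired ListContainersRequest/ListContainersResponse entries in the CRI-O debug log above are the kubelet polling the CRI RuntimeService over the crio socket every few tens of milliseconds. A minimal sketch of issuing the same RPC directly is shown below; it assumes the k8s.io/cri-api v1 package and the default CRI-O socket path /var/run/crio/crio.sock, and would normally need root to open the socket:

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Dial the CRI-O socket; an empty filter returns the full container list,
    	// matching the "No filters were applied" responses in the log.
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	client := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
    		Filter: &runtimeapi.ContainerFilter{},
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, c := range resp.Containers {
    		fmt.Printf("%s %s attempt=%d state=%s\n",
    			c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
    	}
    }
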
	Aug 12 10:51:05 ha-919901 crio[3808]: time="2024-08-12 10:51:05.830041337Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=58dde673-4ad1-4a7c-b34a-5312f4567704 name=/runtime.v1.RuntimeService/Version
	Aug 12 10:51:05 ha-919901 crio[3808]: time="2024-08-12 10:51:05.830126443Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=58dde673-4ad1-4a7c-b34a-5312f4567704 name=/runtime.v1.RuntimeService/Version
	Aug 12 10:51:05 ha-919901 crio[3808]: time="2024-08-12 10:51:05.831521865Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6995cb17-540b-47d3-9779-61f8faf19e4a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:51:05 ha-919901 crio[3808]: time="2024-08-12 10:51:05.831983757Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723459865831961981,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6995cb17-540b-47d3-9779-61f8faf19e4a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:51:05 ha-919901 crio[3808]: time="2024-08-12 10:51:05.832646615Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=78b6b814-e0a8-40a8-998f-714bcaae7ade name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:51:05 ha-919901 crio[3808]: time="2024-08-12 10:51:05.832703243Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=78b6b814-e0a8-40a8-998f-714bcaae7ade name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:51:05 ha-919901 crio[3808]: time="2024-08-12 10:51:05.833163044Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1feef8d0a7509a3143f3435dbab4d706c2a3b37b5a098b71fe9c4ed101579303,PodSandboxId:b75bef0d429e552365caad1acf173c2947f316941914919974b6b9e62ff8b1d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723459766461275399,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d697e68-33fa-4784-90d8-0561d3fff6a8,},Annotations:map[string]string{io.kubernetes.container.hash: 4520a48c,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e1fc5e3909238498769b1e7c49de8d11bd947aa9683e202c5cf20d8b125b790,PodSandboxId:b77024a4392f2dddcb6ccb1abc30040538c3db974a8f3914c00e2c9eb23dd930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723459765444925550,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e967e3926409b9b4490fa429d62fdc,},Annotations:map[string]string{io.kubernetes.container.hash: a14afb07,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c65bbec166c0e1e29b1dc74149f68f9ae8fc6eb749087afa70771e501ea1ea,PodSandboxId:b50fc8ef65be2259d321aba349f98c459ce88e2978e02c7d7453416d5fce1a0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723459762452923199,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2498c72d72e1e71b3b9015542989ea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02a975906041de1c0d96f3482a8837100f6c729585f87ca832b98cf7a9f71edc,PodSandboxId:af3930beb96f25570de66cfa8952d80d38d9f0a0a2a80f6dc13c475062fab782,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723459755302124395,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pj8gg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9a02941-b2f3-4ffe-bdca-07a7322887b1,},Annotations:map[string]string{io.kubernetes.container.hash: cbfab74f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2fc5ccdb449f41c11d07f7a4e5f0213f29756ab76385938d7d4be97b5cb121,PodSandboxId:f299608085a7359bb3ee02d4f12dbdf326b63649c5108f0c5a39af1e83398c66,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723459733285047880,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b0f1622d3c68c0a51defdcc0ae67a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6976ec7a56e859fcd49a53e0a3c9b9e23fa6ab1283c344676b64a27bc30f3ff,PodSandboxId:27c3c8acb92734404a0cd004ccd0c8b0c860547b5d72a17e4152fbee9b56e59c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723459722141629124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftvfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed243a1-62f6-4ad1-8873-0fbe1756be9e,},Annotations:map[string]string{io.kubernetes.container.hash: 3e160f3e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:bc6462a604f646f4d41247df33068d997dc236d79cc2786c0530f72f7574d1ee,PodSandboxId:6da31f89d702cc43c1ee7ce2d665857288109222a66679b4cbcef3fbafef0ad7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723459722102442243,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k5wz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e585a5-9ab7-4211-8ed0-dc1d21345883,},Annotations:map[string]string{io.kubernetes.container.hash: 44f25b1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:1766e0cc1e04cbf0b71e2ea90c9155d15810d451c0d3d7eba275dd2bc5f17ae2,PodSandboxId:b75bef0d429e552365caad1acf173c2947f316941914919974b6b9e62ff8b1d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723459721772325198,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d697e68-33fa-4784-90d8-0561d3fff6a8,},Annotations:map[string]string{io.kubernetes.container.hash: 4520a48c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:819e54d9
554ed5489fc402272cb1ef7f5adb6f5d6e5b210f0649673078590180,PodSandboxId:66d278adbf4b55ffb36576211a5c3ba25b269a1e237662e92d9788f67d2365ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723459721901879539,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148b0299f3f1839b42d2ab8e65cc0f2a,},Annotations:map[string]string{io.kubernetes.container.hash: 1194859b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee56da3827469ffbae6d2e0fafc2a824aa82ce08fc5374d33336e5201fe
a5df5,PodSandboxId:c588fd38b169b04dc89c2057742aef16a4b575345f9dfef462d8bebae9746711,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723459721867253840,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wstd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53bfc998-8d70-4dc5-b0f9-a78610183a2b,},Annotations:map[string]string{io.kubernetes.container.hash: de677b7f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f2b335c58f4e58dc1016841c6013788ca91f08610028f1b3191acb56e98aa93,PodSandboxId:b50fc8ef65be2259d321aba349f98c459ce88e2978e02c7d7453416d5fce1a0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723459721859873479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2498c72d72e1e71b3b9015542989ea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc2643d16d41c975a1af1ee9129789ff983df1bb4c8e03c11fbda01cd3f898d4,PodSandboxId:34445bb6eb65cf7c05d06cb43e6f84c241c458dbeefabbd6a15e9e33ca49e151,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723459721715881844,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e4b07d53879d3ec685dd71228335fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40a98a9a1e93623da9b24a95b7598aefeec227db5303dd2ec1bfad11b70d58bc,PodSandboxId:b77024a4392f2dddcb6ccb1abc30040538c3db974a8f3914c00e2c9eb23dd930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723459721619677152,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e967e3926409b9b4490fa429d62fdc,},Annotations:map[string]string{io.kubernetes.container.hash: a14afb07,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87dc2b222be5fad0df75edbfc5ffee9c05f568c78aaefea95c0fcf09ce77244e,PodSandboxId:9201197c1ac54eaf6a8c84ccaa8d2d8589790723cd2a7be14900c7a9bfd334ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723459717045196445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rc7cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92f21234-d4e8-4f0e-a8e5-356db2297843,},Annotations:map[string]string{io.kubernetes.container.hash: f28abbb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8542d2fe34f2b44c191e084e1f85f0eb7b1a0b1a39d63b3fd3ba0510027c5668,PodSandboxId:40dfaa461230a2e373c966c081e58e06b69e54e472764d0338e67823d0a759f7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723459217676022810,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pj8gg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9a02941-b2f3-4ffe-bdca-07a7322887b1,},Annot
ations:map[string]string{io.kubernetes.container.hash: cbfab74f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0c6b246369b77b455404b70fd88b94e4da26cbd201a7ae30b74cb7039d25b8,PodSandboxId:7ee3eb4b0b10eb268d84ccf4b86511e118adba8ff72671265ad6936653c79876,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723459065194039855,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wstd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53bfc998-8d70-4dc5-b0f9-a78610183a2b,},Annotations:map[string]string{io.kube
rnetes.container.hash: de677b7f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec7364f484b0d7fa35ace0124b8dc922a617b051b4ef8dc8d5513b3c9f49bf2b,PodSandboxId:a88f690225d3f6e35d63e64a591930de23dbdadec994a1d50c2a1302aaf9567d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723459065148082153,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rc7cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92f21234-d4e8-4f0e-a8e5-356db2297843,},Annotations:map[string]string{io.kubernetes.container.hash: f28abbb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d3c2394cc8cd76cb420cb4132296fa326bf586e6079af20c991bbd39a02e6bf,PodSandboxId:2abd5fefba6f34c44814b969b1a9bc9f1aa44fea6ad95145b733cb0c7e82bff9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723459052942878767,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k5wz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e585a5-9ab7-4211-8ed0-dc1d21345883,},Annotations:map[string]string{io.kubernetes.container.hash: 44f25b1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd3e13fb2b3b4d48e2306ffd36c6241a47faafe75f1b528e3a316c9bec0705f,PodSandboxId:b7d28551c45a6f766983f4c1618f75eefcc2aa63991e48279d50e948648a119a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723459048117998507,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftvfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed243a1-62f6-4ad1-8873-0fbe1756be9e,},Annotations:map[string]string{io.kubernetes.container.hash: 3e160f3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af78571207ce34c193f10ac91fe17888677e013439ec5e2bf1781b8f46309bf,PodSandboxId:06243d97384e5ebbf9480bd664848a21f3329d17103ac89c25d646cd5c6af2ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723459028074909889,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e4b07d53879d3ec685dd71228335fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c30877cfdccaa151966735f7bbaba34a1f500f301ce489538d01d863c68ee14,PodSandboxId:fae04d253fe0c5b3c2344ea51b9e34bb9ffbb9a35549113dbbdf82302161c5db,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723459028024477228,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148b0299f3f1839b42d2ab8e65cc0f2a,},Annotations:map[string]string{io.kubernetes.container.hash: 1194859b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=78b6b814-e0a8-40a8-998f-714bcaae7ade name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:51:05 ha-919901 crio[3808]: time="2024-08-12 10:51:05.874798609Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a7074d71-110e-4727-a34c-ef9a0bd48442 name=/runtime.v1.RuntimeService/Version
	Aug 12 10:51:05 ha-919901 crio[3808]: time="2024-08-12 10:51:05.874871168Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a7074d71-110e-4727-a34c-ef9a0bd48442 name=/runtime.v1.RuntimeService/Version
	Aug 12 10:51:05 ha-919901 crio[3808]: time="2024-08-12 10:51:05.875837383Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=99288440-53d7-4ead-a2f7-8def98c15c91 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:51:05 ha-919901 crio[3808]: time="2024-08-12 10:51:05.876563817Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723459865876538531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=99288440-53d7-4ead-a2f7-8def98c15c91 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:51:05 ha-919901 crio[3808]: time="2024-08-12 10:51:05.877015224Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26914dee-6232-485b-a5b1-f740f276db12 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:51:05 ha-919901 crio[3808]: time="2024-08-12 10:51:05.877069736Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26914dee-6232-485b-a5b1-f740f276db12 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:51:05 ha-919901 crio[3808]: time="2024-08-12 10:51:05.877519588Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1feef8d0a7509a3143f3435dbab4d706c2a3b37b5a098b71fe9c4ed101579303,PodSandboxId:b75bef0d429e552365caad1acf173c2947f316941914919974b6b9e62ff8b1d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723459766461275399,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d697e68-33fa-4784-90d8-0561d3fff6a8,},Annotations:map[string]string{io.kubernetes.container.hash: 4520a48c,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e1fc5e3909238498769b1e7c49de8d11bd947aa9683e202c5cf20d8b125b790,PodSandboxId:b77024a4392f2dddcb6ccb1abc30040538c3db974a8f3914c00e2c9eb23dd930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723459765444925550,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e967e3926409b9b4490fa429d62fdc,},Annotations:map[string]string{io.kubernetes.container.hash: a14afb07,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c65bbec166c0e1e29b1dc74149f68f9ae8fc6eb749087afa70771e501ea1ea,PodSandboxId:b50fc8ef65be2259d321aba349f98c459ce88e2978e02c7d7453416d5fce1a0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723459762452923199,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2498c72d72e1e71b3b9015542989ea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02a975906041de1c0d96f3482a8837100f6c729585f87ca832b98cf7a9f71edc,PodSandboxId:af3930beb96f25570de66cfa8952d80d38d9f0a0a2a80f6dc13c475062fab782,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723459755302124395,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pj8gg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9a02941-b2f3-4ffe-bdca-07a7322887b1,},Annotations:map[string]string{io.kubernetes.container.hash: cbfab74f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2fc5ccdb449f41c11d07f7a4e5f0213f29756ab76385938d7d4be97b5cb121,PodSandboxId:f299608085a7359bb3ee02d4f12dbdf326b63649c5108f0c5a39af1e83398c66,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723459733285047880,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b0f1622d3c68c0a51defdcc0ae67a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6976ec7a56e859fcd49a53e0a3c9b9e23fa6ab1283c344676b64a27bc30f3ff,PodSandboxId:27c3c8acb92734404a0cd004ccd0c8b0c860547b5d72a17e4152fbee9b56e59c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723459722141629124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftvfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed243a1-62f6-4ad1-8873-0fbe1756be9e,},Annotations:map[string]string{io.kubernetes.container.hash: 3e160f3e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:bc6462a604f646f4d41247df33068d997dc236d79cc2786c0530f72f7574d1ee,PodSandboxId:6da31f89d702cc43c1ee7ce2d665857288109222a66679b4cbcef3fbafef0ad7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723459722102442243,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k5wz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e585a5-9ab7-4211-8ed0-dc1d21345883,},Annotations:map[string]string{io.kubernetes.container.hash: 44f25b1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:1766e0cc1e04cbf0b71e2ea90c9155d15810d451c0d3d7eba275dd2bc5f17ae2,PodSandboxId:b75bef0d429e552365caad1acf173c2947f316941914919974b6b9e62ff8b1d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723459721772325198,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d697e68-33fa-4784-90d8-0561d3fff6a8,},Annotations:map[string]string{io.kubernetes.container.hash: 4520a48c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:819e54d9
554ed5489fc402272cb1ef7f5adb6f5d6e5b210f0649673078590180,PodSandboxId:66d278adbf4b55ffb36576211a5c3ba25b269a1e237662e92d9788f67d2365ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723459721901879539,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148b0299f3f1839b42d2ab8e65cc0f2a,},Annotations:map[string]string{io.kubernetes.container.hash: 1194859b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee56da3827469ffbae6d2e0fafc2a824aa82ce08fc5374d33336e5201fe
a5df5,PodSandboxId:c588fd38b169b04dc89c2057742aef16a4b575345f9dfef462d8bebae9746711,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723459721867253840,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wstd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53bfc998-8d70-4dc5-b0f9-a78610183a2b,},Annotations:map[string]string{io.kubernetes.container.hash: de677b7f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f2b335c58f4e58dc1016841c6013788ca91f08610028f1b3191acb56e98aa93,PodSandboxId:b50fc8ef65be2259d321aba349f98c459ce88e2978e02c7d7453416d5fce1a0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723459721859873479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2498c72d72e1e71b3b9015542989ea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc2643d16d41c975a1af1ee9129789ff983df1bb4c8e03c11fbda01cd3f898d4,PodSandboxId:34445bb6eb65cf7c05d06cb43e6f84c241c458dbeefabbd6a15e9e33ca49e151,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723459721715881844,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e4b07d53879d3ec685dd71228335fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40a98a9a1e93623da9b24a95b7598aefeec227db5303dd2ec1bfad11b70d58bc,PodSandboxId:b77024a4392f2dddcb6ccb1abc30040538c3db974a8f3914c00e2c9eb23dd930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723459721619677152,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e967e3926409b9b4490fa429d62fdc,},Annotations:map[string]string{io.kubernetes.container.hash: a14afb07,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87dc2b222be5fad0df75edbfc5ffee9c05f568c78aaefea95c0fcf09ce77244e,PodSandboxId:9201197c1ac54eaf6a8c84ccaa8d2d8589790723cd2a7be14900c7a9bfd334ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723459717045196445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rc7cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92f21234-d4e8-4f0e-a8e5-356db2297843,},Annotations:map[string]string{io.kubernetes.container.hash: f28abbb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8542d2fe34f2b44c191e084e1f85f0eb7b1a0b1a39d63b3fd3ba0510027c5668,PodSandboxId:40dfaa461230a2e373c966c081e58e06b69e54e472764d0338e67823d0a759f7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723459217676022810,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pj8gg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9a02941-b2f3-4ffe-bdca-07a7322887b1,},Annot
ations:map[string]string{io.kubernetes.container.hash: cbfab74f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0c6b246369b77b455404b70fd88b94e4da26cbd201a7ae30b74cb7039d25b8,PodSandboxId:7ee3eb4b0b10eb268d84ccf4b86511e118adba8ff72671265ad6936653c79876,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723459065194039855,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wstd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53bfc998-8d70-4dc5-b0f9-a78610183a2b,},Annotations:map[string]string{io.kube
rnetes.container.hash: de677b7f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec7364f484b0d7fa35ace0124b8dc922a617b051b4ef8dc8d5513b3c9f49bf2b,PodSandboxId:a88f690225d3f6e35d63e64a591930de23dbdadec994a1d50c2a1302aaf9567d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723459065148082153,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rc7cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92f21234-d4e8-4f0e-a8e5-356db2297843,},Annotations:map[string]string{io.kubernetes.container.hash: f28abbb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d3c2394cc8cd76cb420cb4132296fa326bf586e6079af20c991bbd39a02e6bf,PodSandboxId:2abd5fefba6f34c44814b969b1a9bc9f1aa44fea6ad95145b733cb0c7e82bff9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723459052942878767,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k5wz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e585a5-9ab7-4211-8ed0-dc1d21345883,},Annotations:map[string]string{io.kubernetes.container.hash: 44f25b1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd3e13fb2b3b4d48e2306ffd36c6241a47faafe75f1b528e3a316c9bec0705f,PodSandboxId:b7d28551c45a6f766983f4c1618f75eefcc2aa63991e48279d50e948648a119a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723459048117998507,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftvfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed243a1-62f6-4ad1-8873-0fbe1756be9e,},Annotations:map[string]string{io.kubernetes.container.hash: 3e160f3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af78571207ce34c193f10ac91fe17888677e013439ec5e2bf1781b8f46309bf,PodSandboxId:06243d97384e5ebbf9480bd664848a21f3329d17103ac89c25d646cd5c6af2ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723459028074909889,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e4b07d53879d3ec685dd71228335fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c30877cfdccaa151966735f7bbaba34a1f500f301ce489538d01d863c68ee14,PodSandboxId:fae04d253fe0c5b3c2344ea51b9e34bb9ffbb9a35549113dbbdf82302161c5db,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723459028024477228,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148b0299f3f1839b42d2ab8e65cc0f2a,},Annotations:map[string]string{io.kubernetes.container.hash: 1194859b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=26914dee-6232-485b-a5b1-f740f276db12 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:51:05 ha-919901 crio[3808]: time="2024-08-12 10:51:05.923819467Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eec65313-57ab-44d7-8a70-7ef29881f6f8 name=/runtime.v1.RuntimeService/Version
	Aug 12 10:51:05 ha-919901 crio[3808]: time="2024-08-12 10:51:05.923926178Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eec65313-57ab-44d7-8a70-7ef29881f6f8 name=/runtime.v1.RuntimeService/Version
	Aug 12 10:51:05 ha-919901 crio[3808]: time="2024-08-12 10:51:05.925122053Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1c59d6a1-b6e0-4108-9791-62914faa322e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:51:05 ha-919901 crio[3808]: time="2024-08-12 10:51:05.925900174Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723459865925873533,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1c59d6a1-b6e0-4108-9791-62914faa322e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:51:05 ha-919901 crio[3808]: time="2024-08-12 10:51:05.926587507Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4cec15c7-c75e-4b75-8130-597b21274989 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:51:05 ha-919901 crio[3808]: time="2024-08-12 10:51:05.926682382Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4cec15c7-c75e-4b75-8130-597b21274989 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:51:05 ha-919901 crio[3808]: time="2024-08-12 10:51:05.927141019Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1feef8d0a7509a3143f3435dbab4d706c2a3b37b5a098b71fe9c4ed101579303,PodSandboxId:b75bef0d429e552365caad1acf173c2947f316941914919974b6b9e62ff8b1d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723459766461275399,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d697e68-33fa-4784-90d8-0561d3fff6a8,},Annotations:map[string]string{io.kubernetes.container.hash: 4520a48c,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e1fc5e3909238498769b1e7c49de8d11bd947aa9683e202c5cf20d8b125b790,PodSandboxId:b77024a4392f2dddcb6ccb1abc30040538c3db974a8f3914c00e2c9eb23dd930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723459765444925550,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e967e3926409b9b4490fa429d62fdc,},Annotations:map[string]string{io.kubernetes.container.hash: a14afb07,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c65bbec166c0e1e29b1dc74149f68f9ae8fc6eb749087afa70771e501ea1ea,PodSandboxId:b50fc8ef65be2259d321aba349f98c459ce88e2978e02c7d7453416d5fce1a0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723459762452923199,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2498c72d72e1e71b3b9015542989ea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02a975906041de1c0d96f3482a8837100f6c729585f87ca832b98cf7a9f71edc,PodSandboxId:af3930beb96f25570de66cfa8952d80d38d9f0a0a2a80f6dc13c475062fab782,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723459755302124395,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pj8gg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9a02941-b2f3-4ffe-bdca-07a7322887b1,},Annotations:map[string]string{io.kubernetes.container.hash: cbfab74f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2fc5ccdb449f41c11d07f7a4e5f0213f29756ab76385938d7d4be97b5cb121,PodSandboxId:f299608085a7359bb3ee02d4f12dbdf326b63649c5108f0c5a39af1e83398c66,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723459733285047880,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b0f1622d3c68c0a51defdcc0ae67a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6976ec7a56e859fcd49a53e0a3c9b9e23fa6ab1283c344676b64a27bc30f3ff,PodSandboxId:27c3c8acb92734404a0cd004ccd0c8b0c860547b5d72a17e4152fbee9b56e59c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723459722141629124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftvfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed243a1-62f6-4ad1-8873-0fbe1756be9e,},Annotations:map[string]string{io.kubernetes.container.hash: 3e160f3e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:bc6462a604f646f4d41247df33068d997dc236d79cc2786c0530f72f7574d1ee,PodSandboxId:6da31f89d702cc43c1ee7ce2d665857288109222a66679b4cbcef3fbafef0ad7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723459722102442243,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k5wz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e585a5-9ab7-4211-8ed0-dc1d21345883,},Annotations:map[string]string{io.kubernetes.container.hash: 44f25b1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:1766e0cc1e04cbf0b71e2ea90c9155d15810d451c0d3d7eba275dd2bc5f17ae2,PodSandboxId:b75bef0d429e552365caad1acf173c2947f316941914919974b6b9e62ff8b1d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723459721772325198,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d697e68-33fa-4784-90d8-0561d3fff6a8,},Annotations:map[string]string{io.kubernetes.container.hash: 4520a48c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:819e54d9
554ed5489fc402272cb1ef7f5adb6f5d6e5b210f0649673078590180,PodSandboxId:66d278adbf4b55ffb36576211a5c3ba25b269a1e237662e92d9788f67d2365ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723459721901879539,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148b0299f3f1839b42d2ab8e65cc0f2a,},Annotations:map[string]string{io.kubernetes.container.hash: 1194859b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee56da3827469ffbae6d2e0fafc2a824aa82ce08fc5374d33336e5201fe
a5df5,PodSandboxId:c588fd38b169b04dc89c2057742aef16a4b575345f9dfef462d8bebae9746711,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723459721867253840,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wstd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53bfc998-8d70-4dc5-b0f9-a78610183a2b,},Annotations:map[string]string{io.kubernetes.container.hash: de677b7f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f2b335c58f4e58dc1016841c6013788ca91f08610028f1b3191acb56e98aa93,PodSandboxId:b50fc8ef65be2259d321aba349f98c459ce88e2978e02c7d7453416d5fce1a0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723459721859873479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2498c72d72e1e71b3b9015542989ea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc2643d16d41c975a1af1ee9129789ff983df1bb4c8e03c11fbda01cd3f898d4,PodSandboxId:34445bb6eb65cf7c05d06cb43e6f84c241c458dbeefabbd6a15e9e33ca49e151,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723459721715881844,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e4b07d53879d3ec685dd71228335fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40a98a9a1e93623da9b24a95b7598aefeec227db5303dd2ec1bfad11b70d58bc,PodSandboxId:b77024a4392f2dddcb6ccb1abc30040538c3db974a8f3914c00e2c9eb23dd930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723459721619677152,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e967e3926409b9b4490fa429d62fdc,},Annotations:map[string]string{io.kubernetes.container.hash: a14afb07,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87dc2b222be5fad0df75edbfc5ffee9c05f568c78aaefea95c0fcf09ce77244e,PodSandboxId:9201197c1ac54eaf6a8c84ccaa8d2d8589790723cd2a7be14900c7a9bfd334ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723459717045196445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rc7cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92f21234-d4e8-4f0e-a8e5-356db2297843,},Annotations:map[string]string{io.kubernetes.container.hash: f28abbb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8542d2fe34f2b44c191e084e1f85f0eb7b1a0b1a39d63b3fd3ba0510027c5668,PodSandboxId:40dfaa461230a2e373c966c081e58e06b69e54e472764d0338e67823d0a759f7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723459217676022810,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pj8gg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9a02941-b2f3-4ffe-bdca-07a7322887b1,},Annot
ations:map[string]string{io.kubernetes.container.hash: cbfab74f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0c6b246369b77b455404b70fd88b94e4da26cbd201a7ae30b74cb7039d25b8,PodSandboxId:7ee3eb4b0b10eb268d84ccf4b86511e118adba8ff72671265ad6936653c79876,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723459065194039855,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wstd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53bfc998-8d70-4dc5-b0f9-a78610183a2b,},Annotations:map[string]string{io.kube
rnetes.container.hash: de677b7f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec7364f484b0d7fa35ace0124b8dc922a617b051b4ef8dc8d5513b3c9f49bf2b,PodSandboxId:a88f690225d3f6e35d63e64a591930de23dbdadec994a1d50c2a1302aaf9567d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723459065148082153,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rc7cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92f21234-d4e8-4f0e-a8e5-356db2297843,},Annotations:map[string]string{io.kubernetes.container.hash: f28abbb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d3c2394cc8cd76cb420cb4132296fa326bf586e6079af20c991bbd39a02e6bf,PodSandboxId:2abd5fefba6f34c44814b969b1a9bc9f1aa44fea6ad95145b733cb0c7e82bff9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723459052942878767,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k5wz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e585a5-9ab7-4211-8ed0-dc1d21345883,},Annotations:map[string]string{io.kubernetes.container.hash: 44f25b1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd3e13fb2b3b4d48e2306ffd36c6241a47faafe75f1b528e3a316c9bec0705f,PodSandboxId:b7d28551c45a6f766983f4c1618f75eefcc2aa63991e48279d50e948648a119a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723459048117998507,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftvfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed243a1-62f6-4ad1-8873-0fbe1756be9e,},Annotations:map[string]string{io.kubernetes.container.hash: 3e160f3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af78571207ce34c193f10ac91fe17888677e013439ec5e2bf1781b8f46309bf,PodSandboxId:06243d97384e5ebbf9480bd664848a21f3329d17103ac89c25d646cd5c6af2ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723459028074909889,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e4b07d53879d3ec685dd71228335fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c30877cfdccaa151966735f7bbaba34a1f500f301ce489538d01d863c68ee14,PodSandboxId:fae04d253fe0c5b3c2344ea51b9e34bb9ffbb9a35549113dbbdf82302161c5db,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723459028024477228,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148b0299f3f1839b42d2ab8e65cc0f2a,},Annotations:map[string]string{io.kubernetes.container.hash: 1194859b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4cec15c7-c75e-4b75-8130-597b21274989 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	1feef8d0a7509       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   b75bef0d429e5       storage-provisioner
	9e1fc5e390923       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            3                   b77024a4392f2       kube-apiserver-ha-919901
	75c65bbec166c       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   2                   b50fc8ef65be2       kube-controller-manager-ha-919901
	02a975906041d       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   af3930beb96f2       busybox-fc5497c4f-pj8gg
	8a2fc5ccdb449       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   f299608085a73       kube-vip-ha-919901
	d6976ec7a56e8       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      2 minutes ago        Running             kube-proxy                1                   27c3c8acb9273       kube-proxy-ftvfl
	bc6462a604f64       917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557                                      2 minutes ago        Running             kindnet-cni               1                   6da31f89d702c       kindnet-k5wz9
	819e54d9554ed       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   66d278adbf4b5       etcd-ha-919901
	ee56da3827469       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   c588fd38b169b       coredns-7db6d8ff4d-wstd4
	1f2b335c58f4e       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago        Exited              kube-controller-manager   1                   b50fc8ef65be2       kube-controller-manager-ha-919901
	1766e0cc1e04c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   b75bef0d429e5       storage-provisioner
	fc2643d16d41c       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      2 minutes ago        Running             kube-scheduler            1                   34445bb6eb65c       kube-scheduler-ha-919901
	40a98a9a1e936       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago        Exited              kube-apiserver            2                   b77024a4392f2       kube-apiserver-ha-919901
	87dc2b222be5f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   9201197c1ac54       coredns-7db6d8ff4d-rc7cl
	8542d2fe34f2b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   40dfaa461230a       busybox-fc5497c4f-pj8gg
	6d0c6b246369b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   7ee3eb4b0b10e       coredns-7db6d8ff4d-wstd4
	ec7364f484b0d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   a88f690225d3f       coredns-7db6d8ff4d-rc7cl
	4d3c2394cc8cd       docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3    13 minutes ago       Exited              kindnet-cni               0                   2abd5fefba6f3       kindnet-k5wz9
	7cd3e13fb2b3b       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago       Exited              kube-proxy                0                   b7d28551c45a6       kube-proxy-ftvfl
	2af78571207ce       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      13 minutes ago       Exited              kube-scheduler            0                   06243d97384e5       kube-scheduler-ha-919901
	0c30877cfdcca       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago       Exited              etcd                      0                   fae04d253fe0c       etcd-ha-919901
	
	
	==> coredns [6d0c6b246369b77b455404b70fd88b94e4da26cbd201a7ae30b74cb7039d25b8] <==
	[INFO] 10.244.1.2:41656 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000234118s
	[INFO] 10.244.1.2:37332 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00027744s
	[INFO] 10.244.1.2:40223 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.010736666s
	[INFO] 10.244.0.4:34313 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099644s
	[INFO] 10.244.0.4:42226 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0013952s
	[INFO] 10.244.0.4:57222 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00017573s
	[INFO] 10.244.0.4:58894 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088282s
	[INFO] 10.244.2.2:46163 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143718s
	[INFO] 10.244.2.2:51332 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158612s
	[INFO] 10.244.2.2:38508 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000102467s
	[INFO] 10.244.1.2:36638 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127128s
	[INFO] 10.244.1.2:48634 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000196174s
	[INFO] 10.244.1.2:34717 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000153611s
	[INFO] 10.244.1.2:59132 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121069s
	[INFO] 10.244.0.4:52263 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00018165s
	[INFO] 10.244.0.4:33949 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000137401s
	[INFO] 10.244.0.4:50775 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000059871s
	[INFO] 10.244.2.2:49015 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152696s
	[INFO] 10.244.2.2:39997 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000159415s
	[INFO] 10.244.2.2:33769 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000094484s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [87dc2b222be5fad0df75edbfc5ffee9c05f568c78aaefea95c0fcf09ce77244e] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1167407080]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (12-Aug-2024 10:48:52.406) (total time: 10001ms):
	Trace[1167407080]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:49:02.408)
	Trace[1167407080]: [10.001690288s] [10.001690288s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1346001048]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (12-Aug-2024 10:48:52.422) (total time: 10001ms):
	Trace[1346001048]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:49:02.424)
	Trace[1346001048]: [10.001780624s] [10.001780624s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [ec7364f484b0d7fa35ace0124b8dc922a617b051b4ef8dc8d5513b3c9f49bf2b] <==
	[INFO] 10.244.0.4:36852 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079487s
	[INFO] 10.244.2.2:51413 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001945024s
	[INFO] 10.244.2.2:47991 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000079163s
	[INFO] 10.244.2.2:37019 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001502663s
	[INFO] 10.244.2.2:54793 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077144s
	[INFO] 10.244.2.2:58782 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056455s
	[INFO] 10.244.1.2:54292 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137507s
	[INFO] 10.244.1.2:59115 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089729s
	[INFO] 10.244.0.4:40377 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115376s
	[INFO] 10.244.0.4:56017 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088959s
	[INFO] 10.244.0.4:52411 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000057997s
	[INFO] 10.244.0.4:46999 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00005214s
	[INFO] 10.244.2.2:42855 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167607s
	[INFO] 10.244.2.2:43154 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117622s
	[INFO] 10.244.2.2:33056 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087079s
	[INFO] 10.244.2.2:52436 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114815s
	[INFO] 10.244.1.2:57727 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129686s
	[INFO] 10.244.1.2:60878 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00018786s
	[INFO] 10.244.0.4:47644 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000114448s
	[INFO] 10.244.2.2:38930 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000159722s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ee56da3827469ffbae6d2e0fafc2a824aa82ce08fc5374d33336e5201fea5df5] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:48242->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[814258677]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (12-Aug-2024 10:48:53.504) (total time: 10299ms):
	Trace[814258677]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:48242->10.96.0.1:443: read: connection reset by peer 10299ms (10:49:03.803)
	Trace[814258677]: [10.299732562s] [10.299732562s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:48242->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:48246->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1920876370]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (12-Aug-2024 10:48:53.845) (total time: 14045ms):
	Trace[1920876370]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:48246->10.96.0.1:443: read: connection reset by peer 14045ms (10:49:07.890)
	Trace[1920876370]: [14.045969474s] [14.045969474s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:48246->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-919901
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-919901
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
	                    minikube.k8s.io/name=ha-919901
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_12T10_37_15_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 10:37:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-919901
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 10:51:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 10:49:30 +0000   Mon, 12 Aug 2024 10:37:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 10:49:30 +0000   Mon, 12 Aug 2024 10:37:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 10:49:30 +0000   Mon, 12 Aug 2024 10:37:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 10:49:30 +0000   Mon, 12 Aug 2024 10:37:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.5
	  Hostname:    ha-919901
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0604b91ac2ed4dfdb4f1eba3f89f2634
	  System UUID:                0604b91a-c2ed-4dfd-b4f1-eba3f89f2634
	  Boot ID:                    e69dd59d-8862-4943-a8be-e27de6624ddc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pj8gg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-rc7cl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-wstd4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-919901                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-k5wz9                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-919901             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-919901    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-ftvfl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-919901             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-919901                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 99s                    kube-proxy       
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-919901 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-919901 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-919901 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-919901 event: Registered Node ha-919901 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-919901 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-919901 event: Registered Node ha-919901 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-919901 event: Registered Node ha-919901 in Controller
	  Warning  ContainerGCFailed        2m52s (x2 over 3m52s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           92s                    node-controller  Node ha-919901 event: Registered Node ha-919901 in Controller
	  Normal   RegisteredNode           87s                    node-controller  Node ha-919901 event: Registered Node ha-919901 in Controller
	  Normal   RegisteredNode           34s                    node-controller  Node ha-919901 event: Registered Node ha-919901 in Controller
	
	
	Name:               ha-919901-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-919901-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
	                    minikube.k8s.io/name=ha-919901
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_12T10_38_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 10:38:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-919901-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 10:51:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 10:49:59 +0000   Mon, 12 Aug 2024 10:49:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 10:49:59 +0000   Mon, 12 Aug 2024 10:49:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 10:49:59 +0000   Mon, 12 Aug 2024 10:49:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 10:49:59 +0000   Mon, 12 Aug 2024 10:49:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.139
	  Hostname:    ha-919901-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b2d78288ee7d4cf8b54a7dd9f4bdd0a2
	  System UUID:                b2d78288-ee7d-4cf8-b54a-7dd9f4bdd0a2
	  Boot ID:                    d72cd250-7bd8-4d68-95c5-1f7c57ad2cfe
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-46rph                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-919901-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-8cqm5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-919901-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-919901-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-cczfj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-919901-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-919901-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 76s                  kube-proxy       
	  Normal  Starting                 12m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)    kubelet          Node ha-919901-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)    kubelet          Node ha-919901-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)    kubelet          Node ha-919901-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                  node-controller  Node ha-919901-m02 event: Registered Node ha-919901-m02 in Controller
	  Normal  RegisteredNode           12m                  node-controller  Node ha-919901-m02 event: Registered Node ha-919901-m02 in Controller
	  Normal  RegisteredNode           11m                  node-controller  Node ha-919901-m02 event: Registered Node ha-919901-m02 in Controller
	  Normal  NodeNotReady             9m3s                 node-controller  Node ha-919901-m02 status is now: NodeNotReady
	  Normal  Starting                 2m8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m8s (x8 over 2m8s)  kubelet          Node ha-919901-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m8s (x8 over 2m8s)  kubelet          Node ha-919901-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m8s (x7 over 2m8s)  kubelet          Node ha-919901-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           92s                  node-controller  Node ha-919901-m02 event: Registered Node ha-919901-m02 in Controller
	  Normal  RegisteredNode           87s                  node-controller  Node ha-919901-m02 event: Registered Node ha-919901-m02 in Controller
	  Normal  RegisteredNode           34s                  node-controller  Node ha-919901-m02 event: Registered Node ha-919901-m02 in Controller
	
	
	Name:               ha-919901-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-919901-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
	                    minikube.k8s.io/name=ha-919901
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_12T10_39_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 10:39:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-919901-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 10:50:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 10:50:38 +0000   Mon, 12 Aug 2024 10:39:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 10:50:38 +0000   Mon, 12 Aug 2024 10:39:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 10:50:38 +0000   Mon, 12 Aug 2024 10:39:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 10:50:38 +0000   Mon, 12 Aug 2024 10:40:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    ha-919901-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 018b12c9070f4bf48440eace9c0062df
	  System UUID:                018b12c9-070f-4bf4-8440-eace9c0062df
	  Boot ID:                    b6c33084-1330-4f63-88e1-22d7fd4dc66b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-v6ddx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-919901-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-6v7rs                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-919901-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-919901-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-6xqjr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-919901-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-919901-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 38s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-919901-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-919901-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-919901-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-919901-m03 event: Registered Node ha-919901-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-919901-m03 event: Registered Node ha-919901-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-919901-m03 event: Registered Node ha-919901-m03 in Controller
	  Normal   RegisteredNode           92s                node-controller  Node ha-919901-m03 event: Registered Node ha-919901-m03 in Controller
	  Normal   RegisteredNode           87s                node-controller  Node ha-919901-m03 event: Registered Node ha-919901-m03 in Controller
	  Normal   Starting                 58s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  58s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  58s                kubelet          Node ha-919901-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    58s                kubelet          Node ha-919901-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     58s                kubelet          Node ha-919901-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 58s                kubelet          Node ha-919901-m03 has been rebooted, boot id: b6c33084-1330-4f63-88e1-22d7fd4dc66b
	  Normal   RegisteredNode           34s                node-controller  Node ha-919901-m03 event: Registered Node ha-919901-m03 in Controller
	
	
	Name:               ha-919901-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-919901-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
	                    minikube.k8s.io/name=ha-919901
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_12T10_40_49_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 10:40:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-919901-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 10:50:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 10:50:57 +0000   Mon, 12 Aug 2024 10:50:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 10:50:57 +0000   Mon, 12 Aug 2024 10:50:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 10:50:57 +0000   Mon, 12 Aug 2024 10:50:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 10:50:57 +0000   Mon, 12 Aug 2024 10:50:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.218
	  Hostname:    ha-919901-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d9924b3342904c65bcf17b38012b444a
	  System UUID:                d9924b33-4290-4c65-bcf1-7b38012b444a
	  Boot ID:                    30fa988a-7807-41ac-b291-dc75074e230b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-clr9b       100m (5%)     100m (5%)   50Mi (2%)   50Mi (2%)   10m
	  kube-system                 kube-proxy-2h4vt    0 (0%)        0 (0%)      0 (0%)      0 (0%)      10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   RegisteredNode           10m                node-controller  Node ha-919901-m04 event: Registered Node ha-919901-m04 in Controller
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-919901-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-919901-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-919901-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-919901-m04 event: Registered Node ha-919901-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-919901-m04 event: Registered Node ha-919901-m04 in Controller
	  Normal   NodeReady                9m59s              kubelet          Node ha-919901-m04 status is now: NodeReady
	  Normal   RegisteredNode           92s                node-controller  Node ha-919901-m04 event: Registered Node ha-919901-m04 in Controller
	  Normal   RegisteredNode           87s                node-controller  Node ha-919901-m04 event: Registered Node ha-919901-m04 in Controller
	  Normal   NodeNotReady             52s                node-controller  Node ha-919901-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           34s                node-controller  Node ha-919901-m04 event: Registered Node ha-919901-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9s (x2 over 9s)    kubelet          Node ha-919901-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x2 over 9s)    kubelet          Node ha-919901-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x2 over 9s)    kubelet          Node ha-919901-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9s                 kubelet          Node ha-919901-m04 has been rebooted, boot id: 30fa988a-7807-41ac-b291-dc75074e230b
	  Normal   NodeReady                9s                 kubelet          Node ha-919901-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.064986] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.049228] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.190717] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.120674] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.278615] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[Aug12 10:37] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +3.648433] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.060066] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.249848] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.088679] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.931862] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.868842] kauditd_printk_skb: 29 callbacks suppressed
	[Aug12 10:38] kauditd_printk_skb: 26 callbacks suppressed
	[Aug12 10:45] kauditd_printk_skb: 1 callbacks suppressed
	[Aug12 10:48] systemd-fstab-generator[3726]: Ignoring "noauto" option for root device
	[  +0.145695] systemd-fstab-generator[3738]: Ignoring "noauto" option for root device
	[  +0.176311] systemd-fstab-generator[3752]: Ignoring "noauto" option for root device
	[  +0.152826] systemd-fstab-generator[3764]: Ignoring "noauto" option for root device
	[  +0.276685] systemd-fstab-generator[3792]: Ignoring "noauto" option for root device
	[  +7.905668] systemd-fstab-generator[3897]: Ignoring "noauto" option for root device
	[  +0.088150] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.352854] kauditd_printk_skb: 22 callbacks suppressed
	[ +11.859150] kauditd_printk_skb: 76 callbacks suppressed
	[Aug12 10:49] kauditd_printk_skb: 3 callbacks suppressed
	[  +9.069363] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [0c30877cfdccaa151966735f7bbaba34a1f500f301ce489538d01d863c68ee14] <==
	{"level":"info","ts":"2024-08-12T10:46:55.81048Z","caller":"traceutil/trace.go:171","msg":"trace[2146744767] range","detail":"{range_begin:/registry/deployments/; range_end:/registry/deployments0; }","duration":"249.115462ms","start":"2024-08-12T10:46:55.561358Z","end":"2024-08-12T10:46:55.810474Z","steps":["trace[2146744767] 'agreement among raft nodes before linearized reading'  (duration: 247.725009ms)"],"step_count":1}
	2024/08/12 10:46:55 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-12T10:46:55.808963Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"256.340324ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" limit:10000 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-12T10:46:55.81051Z","caller":"traceutil/trace.go:171","msg":"trace[1448295790] range","detail":"{range_begin:/registry/validatingadmissionpolicies/; range_end:/registry/validatingadmissionpolicies0; }","duration":"257.939097ms","start":"2024-08-12T10:46:55.552567Z","end":"2024-08-12T10:46:55.810506Z","steps":["trace[1448295790] 'agreement among raft nodes before linearized reading'  (duration: 256.347003ms)"],"step_count":1}
	2024/08/12 10:46:55 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-12T10:46:55.880022Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-12T10:46:55.880118Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-12T10:46:55.880343Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"c5263387c79c0223","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-12T10:46:55.880582Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f8c824025eafd254"}
	{"level":"info","ts":"2024-08-12T10:46:55.880829Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f8c824025eafd254"}
	{"level":"info","ts":"2024-08-12T10:46:55.880896Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f8c824025eafd254"}
	{"level":"info","ts":"2024-08-12T10:46:55.881009Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254"}
	{"level":"info","ts":"2024-08-12T10:46:55.881063Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254"}
	{"level":"info","ts":"2024-08-12T10:46:55.88112Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254"}
	{"level":"info","ts":"2024-08-12T10:46:55.881154Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f8c824025eafd254"}
	{"level":"info","ts":"2024-08-12T10:46:55.881182Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"adb6b1085391554e"}
	{"level":"info","ts":"2024-08-12T10:46:55.881208Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"adb6b1085391554e"}
	{"level":"info","ts":"2024-08-12T10:46:55.88131Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"adb6b1085391554e"}
	{"level":"info","ts":"2024-08-12T10:46:55.881423Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c5263387c79c0223","remote-peer-id":"adb6b1085391554e"}
	{"level":"info","ts":"2024-08-12T10:46:55.881469Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c5263387c79c0223","remote-peer-id":"adb6b1085391554e"}
	{"level":"info","ts":"2024-08-12T10:46:55.881519Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c5263387c79c0223","remote-peer-id":"adb6b1085391554e"}
	{"level":"info","ts":"2024-08-12T10:46:55.881547Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"adb6b1085391554e"}
	{"level":"info","ts":"2024-08-12T10:46:55.884544Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.5:2380"}
	{"level":"info","ts":"2024-08-12T10:46:55.884704Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.5:2380"}
	{"level":"info","ts":"2024-08-12T10:46:55.884755Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-919901","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.5:2380"],"advertise-client-urls":["https://192.168.39.5:2379"]}
	
	
	==> etcd [819e54d9554ed5489fc402272cb1ef7f5adb6f5d6e5b210f0649673078590180] <==
	{"level":"warn","ts":"2024-08-12T10:50:02.778606Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"adb6b1085391554e","rtt":"0s","error":"dial tcp 192.168.39.195:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-08-12T10:50:02.817947Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"adb6b1085391554e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:50:02.917019Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"adb6b1085391554e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:50:02.932651Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"adb6b1085391554e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:50:02.988548Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"adb6b1085391554e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:50:02.990385Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"adb6b1085391554e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:50:03.017918Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"adb6b1085391554e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:50:03.11735Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"adb6b1085391554e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:50:03.122421Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"adb6b1085391554e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T10:50:06.206522Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.195:2380/version","remote-member-id":"adb6b1085391554e","error":"Get \"https://192.168.39.195:2380/version\": dial tcp 192.168.39.195:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-12T10:50:06.206869Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"adb6b1085391554e","error":"Get \"https://192.168.39.195:2380/version\": dial tcp 192.168.39.195:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-12T10:50:07.779564Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"adb6b1085391554e","rtt":"0s","error":"dial tcp 192.168.39.195:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-12T10:50:07.779736Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"adb6b1085391554e","rtt":"0s","error":"dial tcp 192.168.39.195:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-12T10:50:10.208627Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.195:2380/version","remote-member-id":"adb6b1085391554e","error":"Get \"https://192.168.39.195:2380/version\": dial tcp 192.168.39.195:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-12T10:50:10.208697Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"adb6b1085391554e","error":"Get \"https://192.168.39.195:2380/version\": dial tcp 192.168.39.195:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-12T10:50:12.780789Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"adb6b1085391554e","rtt":"0s","error":"dial tcp 192.168.39.195:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-12T10:50:12.780823Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"adb6b1085391554e","rtt":"0s","error":"dial tcp 192.168.39.195:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-12T10:50:13.256653Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"adb6b1085391554e"}
	{"level":"info","ts":"2024-08-12T10:50:13.256904Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c5263387c79c0223","remote-peer-id":"adb6b1085391554e"}
	{"level":"info","ts":"2024-08-12T10:50:13.258672Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"c5263387c79c0223","remote-peer-id":"adb6b1085391554e"}
	{"level":"info","ts":"2024-08-12T10:50:13.279051Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"c5263387c79c0223","to":"adb6b1085391554e","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-12T10:50:13.279109Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"c5263387c79c0223","remote-peer-id":"adb6b1085391554e"}
	{"level":"info","ts":"2024-08-12T10:50:13.290397Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"c5263387c79c0223","to":"adb6b1085391554e","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-12T10:50:13.290592Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"c5263387c79c0223","remote-peer-id":"adb6b1085391554e"}
	{"level":"info","ts":"2024-08-12T10:50:18.467764Z","caller":"traceutil/trace.go:171","msg":"trace[236088396] transaction","detail":"{read_only:false; response_revision:2358; number_of_response:1; }","duration":"162.322496ms","start":"2024-08-12T10:50:18.305398Z","end":"2024-08-12T10:50:18.467721Z","steps":["trace[236088396] 'process raft request'  (duration: 160.949792ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:51:06 up 14 min,  0 users,  load average: 0.43, 0.47, 0.34
	Linux ha-919901 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4d3c2394cc8cd76cb420cb4132296fa326bf586e6079af20c991bbd39a02e6bf] <==
	I0812 10:46:23.961889       1 main.go:299] handling current node
	I0812 10:46:33.952159       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0812 10:46:33.952190       1 main.go:299] handling current node
	I0812 10:46:33.952207       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0812 10:46:33.952213       1 main.go:322] Node ha-919901-m02 has CIDR [10.244.1.0/24] 
	I0812 10:46:33.952422       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0812 10:46:33.952444       1 main.go:322] Node ha-919901-m03 has CIDR [10.244.2.0/24] 
	I0812 10:46:33.952504       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0812 10:46:33.952522       1 main.go:322] Node ha-919901-m04 has CIDR [10.244.3.0/24] 
	I0812 10:46:43.951870       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0812 10:46:43.951923       1 main.go:322] Node ha-919901-m04 has CIDR [10.244.3.0/24] 
	I0812 10:46:43.952161       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0812 10:46:43.952183       1 main.go:299] handling current node
	I0812 10:46:43.952195       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0812 10:46:43.952200       1 main.go:322] Node ha-919901-m02 has CIDR [10.244.1.0/24] 
	I0812 10:46:43.952305       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0812 10:46:43.952348       1 main.go:322] Node ha-919901-m03 has CIDR [10.244.2.0/24] 
	I0812 10:46:53.960285       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0812 10:46:53.960336       1 main.go:299] handling current node
	I0812 10:46:53.960352       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0812 10:46:53.960358       1 main.go:322] Node ha-919901-m02 has CIDR [10.244.1.0/24] 
	I0812 10:46:53.960540       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0812 10:46:53.960563       1 main.go:322] Node ha-919901-m03 has CIDR [10.244.2.0/24] 
	I0812 10:46:53.960648       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0812 10:46:53.960669       1 main.go:322] Node ha-919901-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [bc6462a604f646f4d41247df33068d997dc236d79cc2786c0530f72f7574d1ee] <==
	I0812 10:50:33.198643       1 main.go:322] Node ha-919901-m04 has CIDR [10.244.3.0/24] 
	I0812 10:50:43.197929       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0812 10:50:43.198027       1 main.go:322] Node ha-919901-m02 has CIDR [10.244.1.0/24] 
	I0812 10:50:43.198298       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0812 10:50:43.198331       1 main.go:322] Node ha-919901-m03 has CIDR [10.244.2.0/24] 
	I0812 10:50:43.198415       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0812 10:50:43.198436       1 main.go:322] Node ha-919901-m04 has CIDR [10.244.3.0/24] 
	I0812 10:50:43.198508       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0812 10:50:43.198528       1 main.go:299] handling current node
	I0812 10:50:53.205752       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0812 10:50:53.205866       1 main.go:299] handling current node
	I0812 10:50:53.205895       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0812 10:50:53.205913       1 main.go:322] Node ha-919901-m02 has CIDR [10.244.1.0/24] 
	I0812 10:50:53.206090       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0812 10:50:53.206138       1 main.go:322] Node ha-919901-m03 has CIDR [10.244.2.0/24] 
	I0812 10:50:53.206292       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0812 10:50:53.206325       1 main.go:322] Node ha-919901-m04 has CIDR [10.244.3.0/24] 
	I0812 10:51:03.197556       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0812 10:51:03.197603       1 main.go:322] Node ha-919901-m04 has CIDR [10.244.3.0/24] 
	I0812 10:51:03.197801       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0812 10:51:03.197822       1 main.go:299] handling current node
	I0812 10:51:03.197835       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0812 10:51:03.197840       1 main.go:322] Node ha-919901-m02 has CIDR [10.244.1.0/24] 
	I0812 10:51:03.197900       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0812 10:51:03.197916       1 main.go:322] Node ha-919901-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [40a98a9a1e93623da9b24a95b7598aefeec227db5303dd2ec1bfad11b70d58bc] <==
	I0812 10:48:42.118944       1 options.go:221] external host was not specified, using 192.168.39.5
	I0812 10:48:42.122553       1 server.go:148] Version: v1.30.3
	I0812 10:48:42.122597       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 10:48:42.792922       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0812 10:48:42.795890       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0812 10:48:42.796073       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0812 10:48:42.796315       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0812 10:48:42.796395       1 instance.go:299] Using reconciler: lease
	W0812 10:49:02.784466       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0812 10:49:02.785764       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0812 10:49:02.797740       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [9e1fc5e3909238498769b1e7c49de8d11bd947aa9683e202c5cf20d8b125b790] <==
	I0812 10:49:27.292936       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0812 10:49:27.293420       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0812 10:49:27.329657       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0812 10:49:27.329689       1 policy_source.go:224] refreshing policies
	I0812 10:49:27.354372       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0812 10:49:27.363163       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0812 10:49:27.373986       1 shared_informer.go:320] Caches are synced for configmaps
	I0812 10:49:27.375483       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0812 10:49:27.378387       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0812 10:49:27.378417       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0812 10:49:27.378634       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0812 10:49:27.376109       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0812 10:49:27.385896       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0812 10:49:27.391131       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.139 192.168.39.195]
	I0812 10:49:27.392637       1 controller.go:615] quota admission added evaluator for: endpoints
	I0812 10:49:27.394450       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0812 10:49:27.394480       1 aggregator.go:165] initial CRD sync complete...
	I0812 10:49:27.394500       1 autoregister_controller.go:141] Starting autoregister controller
	I0812 10:49:27.394509       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0812 10:49:27.394514       1 cache.go:39] Caches are synced for autoregister controller
	I0812 10:49:27.399064       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0812 10:49:27.407531       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0812 10:49:28.283100       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0812 10:49:28.754530       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.139 192.168.39.195 192.168.39.5]
	W0812 10:49:38.727178       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.139 192.168.39.5]
	
	
	==> kube-controller-manager [1f2b335c58f4e58dc1016841c6013788ca91f08610028f1b3191acb56e98aa93] <==
	I0812 10:48:43.680200       1 serving.go:380] Generated self-signed cert in-memory
	I0812 10:48:44.001889       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0812 10:48:44.001932       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 10:48:44.003478       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0812 10:48:44.003638       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0812 10:48:44.003813       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0812 10:48:44.004018       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0812 10:49:04.006623       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.5:8443/healthz\": dial tcp 192.168.39.5:8443: connect: connection refused"
	
	
	==> kube-controller-manager [75c65bbec166c0e1e29b1dc74149f68f9ae8fc6eb749087afa70771e501ea1ea] <==
	I0812 10:49:39.880165       1 shared_informer.go:320] Caches are synced for resource quota
	I0812 10:49:39.900688       1 shared_informer.go:320] Caches are synced for taint
	I0812 10:49:39.900831       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0812 10:49:39.900948       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-919901-m04"
	I0812 10:49:39.900997       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-919901"
	I0812 10:49:39.901014       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-919901-m02"
	I0812 10:49:39.901038       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-919901-m03"
	I0812 10:49:39.902522       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0812 10:49:40.328329       1 shared_informer.go:320] Caches are synced for garbage collector
	I0812 10:49:40.328430       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0812 10:49:40.346079       1 shared_informer.go:320] Caches are synced for garbage collector
	I0812 10:49:42.291718       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.64µs"
	I0812 10:49:45.039926       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-qqnkt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-qqnkt\": the object has been modified; please apply your changes to the latest version and try again"
	I0812 10:49:45.042507       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"7975b33c-8206-449f-a51d-014bbab1aaa2", APIVersion:"v1", ResourceVersion:"241", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-qqnkt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-qqnkt": the object has been modified; please apply your changes to the latest version and try again
	I0812 10:49:45.066182       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="69.003428ms"
	I0812 10:49:45.066388       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="102.719µs"
	I0812 10:49:55.044187       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="55.56916ms"
	I0812 10:49:55.044393       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="107.745µs"
	I0812 10:49:58.419436       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.288416ms"
	I0812 10:49:58.421967       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="111.616µs"
	I0812 10:50:09.515487       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.617004ms"
	I0812 10:50:09.515618       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.274µs"
	I0812 10:50:26.674285       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.90127ms"
	I0812 10:50:26.674504       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="110.356µs"
	I0812 10:50:57.924336       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-919901-m04"
	
	
	==> kube-proxy [7cd3e13fb2b3b4d48e2306ffd36c6241a47faafe75f1b528e3a316c9bec0705f] <==
	E0812 10:45:51.925786       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-919901&resourceVersion=1841": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 10:45:54.996506       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 10:45:54.996640       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 10:45:54.996762       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 10:45:54.996832       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 10:45:58.068075       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-919901&resourceVersion=1841": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 10:45:58.068140       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-919901&resourceVersion=1841": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 10:46:01.140034       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 10:46:01.140292       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 10:46:01.140437       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 10:46:01.140839       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 10:46:04.212743       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-919901&resourceVersion=1841": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 10:46:04.212938       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-919901&resourceVersion=1841": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 10:46:10.355322       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 10:46:10.355524       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 10:46:13.428463       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-919901&resourceVersion=1841": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 10:46:13.428526       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-919901&resourceVersion=1841": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 10:46:13.428569       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 10:46:13.428595       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 10:46:25.714928       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 10:46:25.715036       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 10:46:31.859423       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 10:46:31.860293       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 10:46:41.075203       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-919901&resourceVersion=1841": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 10:46:41.075392       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-919901&resourceVersion=1841": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [d6976ec7a56e859fcd49a53e0a3c9b9e23fa6ab1283c344676b64a27bc30f3ff] <==
	E0812 10:49:08.531098       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-919901\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0812 10:49:26.994583       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-919901\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0812 10:49:26.994681       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0812 10:49:27.041408       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0812 10:49:27.041480       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0812 10:49:27.041497       1 server_linux.go:165] "Using iptables Proxier"
	I0812 10:49:27.099601       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0812 10:49:27.099908       1 server.go:872] "Version info" version="v1.30.3"
	I0812 10:49:27.099935       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 10:49:27.107750       1 config.go:192] "Starting service config controller"
	I0812 10:49:27.107896       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0812 10:49:27.107953       1 config.go:101] "Starting endpoint slice config controller"
	I0812 10:49:27.107971       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0812 10:49:27.108951       1 config.go:319] "Starting node config controller"
	I0812 10:49:27.109031       1 shared_informer.go:313] Waiting for caches to sync for node config
	W0812 10:49:30.035302       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-919901&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 10:49:30.035751       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-919901&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 10:49:30.036731       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 10:49:30.036811       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 10:49:30.036911       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 10:49:30.036956       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 10:49:30.037056       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0812 10:49:31.009157       1 shared_informer.go:320] Caches are synced for service config
	I0812 10:49:31.312550       1 shared_informer.go:320] Caches are synced for node config
	I0812 10:49:31.508852       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2af78571207ce34c193f10ac91fe17888677e013439ec5e2bf1781b8f46309bf] <==
	W0812 10:46:51.980092       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0812 10:46:51.980135       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0812 10:46:51.993536       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0812 10:46:51.993588       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0812 10:46:52.209057       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0812 10:46:52.209097       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0812 10:46:52.399930       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0812 10:46:52.400066       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0812 10:46:52.581837       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0812 10:46:52.581885       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0812 10:46:52.688037       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0812 10:46:52.688110       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0812 10:46:52.691566       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0812 10:46:52.691718       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0812 10:46:53.407780       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0812 10:46:53.407863       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0812 10:46:53.480788       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0812 10:46:53.480867       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0812 10:46:53.544618       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0812 10:46:53.544748       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0812 10:46:54.319664       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0812 10:46:54.319709       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0812 10:46:55.256623       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0812 10:46:55.256677       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0812 10:46:55.763703       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [fc2643d16d41c975a1af1ee9129789ff983df1bb4c8e03c11fbda01cd3f898d4] <==
	W0812 10:49:19.536633       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0812 10:49:19.536791       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0812 10:49:19.568709       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0812 10:49:19.569553       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0812 10:49:20.229284       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0812 10:49:20.229362       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0812 10:49:21.181283       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0812 10:49:21.181430       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0812 10:49:21.647788       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0812 10:49:21.647837       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0812 10:49:22.486092       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0812 10:49:22.486136       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0812 10:49:22.661028       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0812 10:49:22.661172       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0812 10:49:23.026498       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0812 10:49:23.026558       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0812 10:49:23.319319       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0812 10:49:23.319412       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0812 10:49:23.348308       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.5:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0812 10:49:23.348404       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.5:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0812 10:49:23.578454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0812 10:49:23.578504       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0812 10:49:24.329898       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0812 10:49:24.329949       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	I0812 10:49:32.808609       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 12 10:49:17 ha-919901 kubelet[1369]: I0812 10:49:17.747292    1369 status_manager.go:853] "Failed to get status for pod" podUID="82b0f1622d3c68c0a51defdcc0ae67a3" pod="kube-system/kube-vip-ha-919901" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-vip-ha-919901\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Aug 12 10:49:20 ha-919901 kubelet[1369]: E0812 10:49:20.818652    1369 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-919901\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-919901?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Aug 12 10:49:20 ha-919901 kubelet[1369]: W0812 10:49:20.818658    1369 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-proxy&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	Aug 12 10:49:20 ha-919901 kubelet[1369]: I0812 10:49:20.819372    1369 status_manager.go:853] "Failed to get status for pod" podUID="37e967e3926409b9b4490fa429d62fdc" pod="kube-system/kube-apiserver-ha-919901" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-919901\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Aug 12 10:49:20 ha-919901 kubelet[1369]: E0812 10:49:20.819379    1369 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-proxy&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	Aug 12 10:49:22 ha-919901 kubelet[1369]: I0812 10:49:22.432315    1369 scope.go:117] "RemoveContainer" containerID="1f2b335c58f4e58dc1016841c6013788ca91f08610028f1b3191acb56e98aa93"
	Aug 12 10:49:23 ha-919901 kubelet[1369]: E0812 10:49:23.890797    1369 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-919901?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Aug 12 10:49:23 ha-919901 kubelet[1369]: E0812 10:49:23.890793    1369 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-919901.17eaf548cf4cc966  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-919901,UID:37e967e3926409b9b4490fa429d62fdc,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-919901,},FirstTimestamp:2024-08-12 10:45:00.48700247 +0000 UTC m=+466.183349416,LastTimestamp:2024-08-12 10:45:00.48700247 +0000 UTC m=+466.183349416,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-919901,}"
	Aug 12 10:49:23 ha-919901 kubelet[1369]: I0812 10:49:23.890914    1369 status_manager.go:853] "Failed to get status for pod" podUID="75e585a5-9ab7-4211-8ed0-dc1d21345883" pod="kube-system/kindnet-k5wz9" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-k5wz9\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Aug 12 10:49:23 ha-919901 kubelet[1369]: E0812 10:49:23.890997    1369 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-919901\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-919901?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Aug 12 10:49:25 ha-919901 kubelet[1369]: I0812 10:49:25.431864    1369 scope.go:117] "RemoveContainer" containerID="40a98a9a1e93623da9b24a95b7598aefeec227db5303dd2ec1bfad11b70d58bc"
	Aug 12 10:49:26 ha-919901 kubelet[1369]: I0812 10:49:26.432876    1369 scope.go:117] "RemoveContainer" containerID="1766e0cc1e04cbf0b71e2ea90c9155d15810d451c0d3d7eba275dd2bc5f17ae2"
	Aug 12 10:49:26 ha-919901 kubelet[1369]: I0812 10:49:26.962669    1369 status_manager.go:853] "Failed to get status for pod" podUID="1b2498c72d72e1e71b3b9015542989ea" pod="kube-system/kube-controller-manager-ha-919901" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-919901\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Aug 12 10:49:26 ha-919901 kubelet[1369]: E0812 10:49:26.963002    1369 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-919901\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-919901?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Aug 12 10:49:30 ha-919901 kubelet[1369]: E0812 10:49:30.034735    1369 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-919901\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-919901?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Aug 12 10:49:30 ha-919901 kubelet[1369]: I0812 10:49:30.034764    1369 status_manager.go:853] "Failed to get status for pod" podUID="7ed243a1-62f6-4ad1-8873-0fbe1756be9e" pod="kube-system/kube-proxy-ftvfl" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ftvfl\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Aug 12 10:49:36 ha-919901 kubelet[1369]: I0812 10:49:36.837830    1369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-pj8gg" podStartSLOduration=560.323305147 podStartE2EDuration="9m22.837782273s" podCreationTimestamp="2024-08-12 10:40:14 +0000 UTC" firstStartedPulling="2024-08-12 10:40:15.143332767 +0000 UTC m=+180.839679699" lastFinishedPulling="2024-08-12 10:40:17.65780989 +0000 UTC m=+183.354156825" observedRunningTime="2024-08-12 10:40:18.273298703 +0000 UTC m=+183.969645655" watchObservedRunningTime="2024-08-12 10:49:36.837782273 +0000 UTC m=+742.534129224"
	Aug 12 10:50:05 ha-919901 kubelet[1369]: I0812 10:50:05.431764    1369 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-919901" podUID="46735446-a563-4870-9509-441ad0cd5c45"
	Aug 12 10:50:05 ha-919901 kubelet[1369]: I0812 10:50:05.455283    1369 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-919901"
	Aug 12 10:50:14 ha-919901 kubelet[1369]: I0812 10:50:14.453076    1369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-919901" podStartSLOduration=9.452987331 podStartE2EDuration="9.452987331s" podCreationTimestamp="2024-08-12 10:50:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-12 10:50:14.451429081 +0000 UTC m=+780.147776034" watchObservedRunningTime="2024-08-12 10:50:14.452987331 +0000 UTC m=+780.149334286"
	Aug 12 10:50:14 ha-919901 kubelet[1369]: E0812 10:50:14.515567    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 10:50:14 ha-919901 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 10:50:14 ha-919901 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 10:50:14 ha-919901 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 10:50:14 ha-919901 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0812 10:51:05.434309   29873 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19409-3774/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-919901 -n ha-919901
helpers_test.go:261: (dbg) Run:  kubectl --context ha-919901 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (374.85s)
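Note on the stderr above: the "failed to read file .../lastStart.txt: bufio.Scanner: token too long" message is Go's bufio.Scanner hitting its default 64 KiB per-token limit on an overlong log line; it is a limitation of the log reader, not of the cluster under test. Below is a minimal sketch (placeholder file path, not the actual test-harness code) of reading such a file with a larger scanner buffer via Scanner.Buffer:

	// Sketch only: demonstrates why "bufio.Scanner: token too long" appears and
	// how a larger buffer avoids it. The file path is a placeholder.
	package main
	
	import (
		"bufio"
		"fmt"
		"os"
	)
	
	func main() {
		f, err := os.Open("lastStart.txt") // placeholder path
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()
	
		sc := bufio.NewScanner(f)
		// Default cap is bufio.MaxScanTokenSize (64 KiB); allow lines up to 1 MiB.
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			// Without the larger buffer, an overlong line surfaces here as
			// bufio.ErrTooLong ("token too long").
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}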

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-919901 stop -v=7 --alsologtostderr: exit status 82 (2m0.470574198s)

                                                
                                                
-- stdout --
	* Stopping node "ha-919901-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 10:51:25.081478   30288 out.go:291] Setting OutFile to fd 1 ...
	I0812 10:51:25.081769   30288 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:51:25.081779   30288 out.go:304] Setting ErrFile to fd 2...
	I0812 10:51:25.081784   30288 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:51:25.082042   30288 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 10:51:25.082307   30288 out.go:298] Setting JSON to false
	I0812 10:51:25.082412   30288 mustload.go:65] Loading cluster: ha-919901
	I0812 10:51:25.082791   30288 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:51:25.082891   30288 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/config.json ...
	I0812 10:51:25.083079   30288 mustload.go:65] Loading cluster: ha-919901
	I0812 10:51:25.083252   30288 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:51:25.083281   30288 stop.go:39] StopHost: ha-919901-m04
	I0812 10:51:25.083728   30288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:51:25.083784   30288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:51:25.099635   30288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46185
	I0812 10:51:25.100122   30288 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:51:25.100741   30288 main.go:141] libmachine: Using API Version  1
	I0812 10:51:25.100761   30288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:51:25.101183   30288 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:51:25.103564   30288 out.go:177] * Stopping node "ha-919901-m04"  ...
	I0812 10:51:25.104985   30288 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0812 10:51:25.105015   30288 main.go:141] libmachine: (ha-919901-m04) Calling .DriverName
	I0812 10:51:25.105276   30288 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0812 10:51:25.105304   30288 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHHostname
	I0812 10:51:25.108421   30288 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:51:25.108913   30288 main.go:141] libmachine: (ha-919901-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:f6:73", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:50:52 +0000 UTC Type:0 Mac:52:54:00:4d:f6:73 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:ha-919901-m04 Clientid:01:52:54:00:4d:f6:73}
	I0812 10:51:25.108960   30288 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined IP address 192.168.39.218 and MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:51:25.109092   30288 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHPort
	I0812 10:51:25.109257   30288 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHKeyPath
	I0812 10:51:25.109415   30288 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHUsername
	I0812 10:51:25.109559   30288 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m04/id_rsa Username:docker}
	I0812 10:51:25.191457   30288 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0812 10:51:25.244904   30288 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0812 10:51:25.297173   30288 main.go:141] libmachine: Stopping "ha-919901-m04"...
	I0812 10:51:25.297207   30288 main.go:141] libmachine: (ha-919901-m04) Calling .GetState
	I0812 10:51:25.298871   30288 main.go:141] libmachine: (ha-919901-m04) Calling .Stop
	I0812 10:51:25.302218   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 0/120
	I0812 10:51:26.304252   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 1/120
	I0812 10:51:27.305926   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 2/120
	I0812 10:51:28.307633   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 3/120
	I0812 10:51:29.308859   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 4/120
	I0812 10:51:30.310697   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 5/120
	I0812 10:51:31.311907   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 6/120
	I0812 10:51:32.313378   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 7/120
	I0812 10:51:33.314653   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 8/120
	I0812 10:51:34.316016   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 9/120
	I0812 10:51:35.317220   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 10/120
	I0812 10:51:36.318750   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 11/120
	I0812 10:51:37.319862   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 12/120
	I0812 10:51:38.321376   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 13/120
	I0812 10:51:39.322548   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 14/120
	I0812 10:51:40.323918   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 15/120
	I0812 10:51:41.325321   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 16/120
	I0812 10:51:42.327547   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 17/120
	I0812 10:51:43.329365   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 18/120
	I0812 10:51:44.331629   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 19/120
	I0812 10:51:45.333732   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 20/120
	I0812 10:51:46.335267   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 21/120
	I0812 10:51:47.336509   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 22/120
	I0812 10:51:48.337718   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 23/120
	I0812 10:51:49.339378   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 24/120
	I0812 10:51:50.341367   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 25/120
	I0812 10:51:51.342828   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 26/120
	I0812 10:51:52.344758   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 27/120
	I0812 10:51:53.346154   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 28/120
	I0812 10:51:54.347854   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 29/120
	I0812 10:51:55.349329   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 30/120
	I0812 10:51:56.350989   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 31/120
	I0812 10:51:57.352335   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 32/120
	I0812 10:51:58.353789   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 33/120
	I0812 10:51:59.355320   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 34/120
	I0812 10:52:00.357084   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 35/120
	I0812 10:52:01.358726   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 36/120
	I0812 10:52:02.360204   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 37/120
	I0812 10:52:03.361618   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 38/120
	I0812 10:52:04.363586   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 39/120
	I0812 10:52:05.365602   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 40/120
	I0812 10:52:06.366941   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 41/120
	I0812 10:52:07.368517   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 42/120
	I0812 10:52:08.369744   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 43/120
	I0812 10:52:09.371131   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 44/120
	I0812 10:52:10.373075   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 45/120
	I0812 10:52:11.374303   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 46/120
	I0812 10:52:12.375635   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 47/120
	I0812 10:52:13.377052   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 48/120
	I0812 10:52:14.378637   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 49/120
	I0812 10:52:15.380997   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 50/120
	I0812 10:52:16.382431   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 51/120
	I0812 10:52:17.384201   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 52/120
	I0812 10:52:18.385592   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 53/120
	I0812 10:52:19.387134   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 54/120
	I0812 10:52:20.389151   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 55/120
	I0812 10:52:21.391789   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 56/120
	I0812 10:52:22.393418   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 57/120
	I0812 10:52:23.395105   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 58/120
	I0812 10:52:24.396525   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 59/120
	I0812 10:52:25.398961   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 60/120
	I0812 10:52:26.400460   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 61/120
	I0812 10:52:27.401801   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 62/120
	I0812 10:52:28.403949   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 63/120
	I0812 10:52:29.405566   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 64/120
	I0812 10:52:30.407492   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 65/120
	I0812 10:52:31.409106   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 66/120
	I0812 10:52:32.410426   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 67/120
	I0812 10:52:33.412255   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 68/120
	I0812 10:52:34.413400   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 69/120
	I0812 10:52:35.415274   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 70/120
	I0812 10:52:36.416667   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 71/120
	I0812 10:52:37.418022   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 72/120
	I0812 10:52:38.419360   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 73/120
	I0812 10:52:39.420811   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 74/120
	I0812 10:52:40.422306   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 75/120
	I0812 10:52:41.423760   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 76/120
	I0812 10:52:42.425452   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 77/120
	I0812 10:52:43.427313   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 78/120
	I0812 10:52:44.428874   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 79/120
	I0812 10:52:45.431187   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 80/120
	I0812 10:52:46.432858   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 81/120
	I0812 10:52:47.434494   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 82/120
	I0812 10:52:48.436583   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 83/120
	I0812 10:52:49.438253   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 84/120
	I0812 10:52:50.440156   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 85/120
	I0812 10:52:51.441669   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 86/120
	I0812 10:52:52.443182   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 87/120
	I0812 10:52:53.444489   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 88/120
	I0812 10:52:54.446348   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 89/120
	I0812 10:52:55.448922   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 90/120
	I0812 10:52:56.450388   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 91/120
	I0812 10:52:57.451871   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 92/120
	I0812 10:52:58.453236   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 93/120
	I0812 10:52:59.455619   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 94/120
	I0812 10:53:00.457792   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 95/120
	I0812 10:53:01.459301   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 96/120
	I0812 10:53:02.460700   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 97/120
	I0812 10:53:03.462103   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 98/120
	I0812 10:53:04.464240   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 99/120
	I0812 10:53:05.466376   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 100/120
	I0812 10:53:06.467708   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 101/120
	I0812 10:53:07.470106   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 102/120
	I0812 10:53:08.471575   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 103/120
	I0812 10:53:09.473193   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 104/120
	I0812 10:53:10.475356   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 105/120
	I0812 10:53:11.477255   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 106/120
	I0812 10:53:12.479540   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 107/120
	I0812 10:53:13.481245   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 108/120
	I0812 10:53:14.482584   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 109/120
	I0812 10:53:15.484712   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 110/120
	I0812 10:53:16.486380   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 111/120
	I0812 10:53:17.487762   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 112/120
	I0812 10:53:18.489542   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 113/120
	I0812 10:53:19.491446   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 114/120
	I0812 10:53:20.493876   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 115/120
	I0812 10:53:21.495661   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 116/120
	I0812 10:53:22.497333   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 117/120
	I0812 10:53:23.499542   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 118/120
	I0812 10:53:24.500906   30288 main.go:141] libmachine: (ha-919901-m04) Waiting for machine to stop 119/120
	I0812 10:53:25.502060   30288 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0812 10:53:25.502127   30288 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0812 10:53:25.504071   30288 out.go:177] 
	W0812 10:53:25.505605   30288 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0812 10:53:25.505621   30288 out.go:239] * 
	* 
	W0812 10:53:25.507913   30288 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 10:53:25.509582   30288 out.go:177] 

** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-919901 stop -v=7 --alsologtostderr": exit status 82
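The stop failure above follows the sequence visible in the verbose log: back up /etc/cni and /etc/kubernetes with rsync, ask the kvm2 driver to stop the VM, then poll the machine state once per second for 120 attempts ("Waiting for machine to stop N/120") before giving up with GUEST_STOP_TIMEOUT and exit status 82. A minimal Go sketch of that stop-and-poll pattern, using hypothetical machine/stopWithTimeout names rather than minikube's actual libmachine API:

	// Sketch only: request a stop, then poll the state once per second and
	// give up if the guest never leaves "Running", mirroring the log above.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	type machine interface {
		Stop() error   // ask the hypervisor to shut the guest down
		State() string // e.g. "Running", "Stopped"
	}

	func stopWithTimeout(m machine, maxRetries int) error {
		if err := m.Stop(); err != nil {
			return err
		}
		for i := 0; i < maxRetries; i++ {
			if m.State() != "Running" {
				return nil // the guest reached a stopped state in time
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxRetries)
			time.Sleep(1 * time.Second)
		}
		// The guest never stopped; the caller reports GUEST_STOP_TIMEOUT.
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	// fakeVM never leaves "Running", reproducing the timeout path from the log.
	type fakeVM struct{}

	func (fakeVM) Stop() error   { return nil }
	func (fakeVM) State() string { return "Running" }

	func main() {
		if err := stopWithTimeout(fakeVM{}, 3); err != nil { // 3 instead of 120 to keep the demo short
			fmt.Println("stop err:", err)
		}
	}

Running the sketch prints three wait lines and then the same "unable to stop vm" error that the real run hit after its full 120-second window.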
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 status -v=7 --alsologtostderr
E0812 10:53:30.975693   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-919901 status -v=7 --alsologtostderr: exit status 3 (18.99768856s)

-- stdout --
	ha-919901
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-919901-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-919901-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0812 10:53:25.554825   30720 out.go:291] Setting OutFile to fd 1 ...
	I0812 10:53:25.555105   30720 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:53:25.555117   30720 out.go:304] Setting ErrFile to fd 2...
	I0812 10:53:25.555121   30720 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:53:25.555363   30720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 10:53:25.555578   30720 out.go:298] Setting JSON to false
	I0812 10:53:25.555602   30720 mustload.go:65] Loading cluster: ha-919901
	I0812 10:53:25.555640   30720 notify.go:220] Checking for updates...
	I0812 10:53:25.556034   30720 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:53:25.556051   30720 status.go:255] checking status of ha-919901 ...
	I0812 10:53:25.556457   30720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:53:25.556524   30720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:53:25.573875   30720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45779
	I0812 10:53:25.574301   30720 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:53:25.574871   30720 main.go:141] libmachine: Using API Version  1
	I0812 10:53:25.574906   30720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:53:25.575252   30720 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:53:25.575472   30720 main.go:141] libmachine: (ha-919901) Calling .GetState
	I0812 10:53:25.577131   30720 status.go:330] ha-919901 host status = "Running" (err=<nil>)
	I0812 10:53:25.577184   30720 host.go:66] Checking if "ha-919901" exists ...
	I0812 10:53:25.577478   30720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:53:25.577533   30720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:53:25.592270   30720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45813
	I0812 10:53:25.592727   30720 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:53:25.593354   30720 main.go:141] libmachine: Using API Version  1
	I0812 10:53:25.593398   30720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:53:25.593798   30720 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:53:25.594068   30720 main.go:141] libmachine: (ha-919901) Calling .GetIP
	I0812 10:53:25.597110   30720 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:53:25.597592   30720 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:53:25.597626   30720 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:53:25.597745   30720 host.go:66] Checking if "ha-919901" exists ...
	I0812 10:53:25.598178   30720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:53:25.598228   30720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:53:25.613150   30720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46573
	I0812 10:53:25.613588   30720 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:53:25.614067   30720 main.go:141] libmachine: Using API Version  1
	I0812 10:53:25.614088   30720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:53:25.614369   30720 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:53:25.614547   30720 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:53:25.614758   30720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:53:25.614781   30720 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:53:25.617550   30720 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:53:25.618003   30720 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:53:25.618027   30720 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:53:25.618227   30720 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:53:25.618402   30720 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:53:25.618554   30720 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:53:25.618728   30720 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:53:25.706004   30720 ssh_runner.go:195] Run: systemctl --version
	I0812 10:53:25.713257   30720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:53:25.730617   30720 kubeconfig.go:125] found "ha-919901" server: "https://192.168.39.254:8443"
	I0812 10:53:25.730645   30720 api_server.go:166] Checking apiserver status ...
	I0812 10:53:25.730686   30720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:53:25.745508   30720 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5097/cgroup
	W0812 10:53:25.756231   30720 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5097/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 10:53:25.756323   30720 ssh_runner.go:195] Run: ls
	I0812 10:53:25.761296   30720 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 10:53:25.765473   30720 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 10:53:25.765496   30720 status.go:422] ha-919901 apiserver status = Running (err=<nil>)
	I0812 10:53:25.765506   30720 status.go:257] ha-919901 status: &{Name:ha-919901 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 10:53:25.765524   30720 status.go:255] checking status of ha-919901-m02 ...
	I0812 10:53:25.765868   30720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:53:25.765903   30720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:53:25.781692   30720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37255
	I0812 10:53:25.782097   30720 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:53:25.782569   30720 main.go:141] libmachine: Using API Version  1
	I0812 10:53:25.782590   30720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:53:25.782973   30720 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:53:25.783220   30720 main.go:141] libmachine: (ha-919901-m02) Calling .GetState
	I0812 10:53:25.784957   30720 status.go:330] ha-919901-m02 host status = "Running" (err=<nil>)
	I0812 10:53:25.784984   30720 host.go:66] Checking if "ha-919901-m02" exists ...
	I0812 10:53:25.785279   30720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:53:25.785334   30720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:53:25.801385   30720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40795
	I0812 10:53:25.801769   30720 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:53:25.802254   30720 main.go:141] libmachine: Using API Version  1
	I0812 10:53:25.802288   30720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:53:25.802651   30720 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:53:25.802890   30720 main.go:141] libmachine: (ha-919901-m02) Calling .GetIP
	I0812 10:53:25.806375   30720 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:53:25.806838   30720 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:48:47 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:53:25.806867   30720 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:53:25.807040   30720 host.go:66] Checking if "ha-919901-m02" exists ...
	I0812 10:53:25.807349   30720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:53:25.807388   30720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:53:25.822254   30720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43009
	I0812 10:53:25.822639   30720 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:53:25.823086   30720 main.go:141] libmachine: Using API Version  1
	I0812 10:53:25.823105   30720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:53:25.823406   30720 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:53:25.823580   30720 main.go:141] libmachine: (ha-919901-m02) Calling .DriverName
	I0812 10:53:25.823757   30720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:53:25.823775   30720 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHHostname
	I0812 10:53:25.827006   30720 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:53:25.827419   30720 main.go:141] libmachine: (ha-919901-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:34:35", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:48:47 +0000 UTC Type:0 Mac:52:54:00:aa:34:35 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-919901-m02 Clientid:01:52:54:00:aa:34:35}
	I0812 10:53:25.827435   30720 main.go:141] libmachine: (ha-919901-m02) DBG | domain ha-919901-m02 has defined IP address 192.168.39.139 and MAC address 52:54:00:aa:34:35 in network mk-ha-919901
	I0812 10:53:25.827684   30720 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHPort
	I0812 10:53:25.827896   30720 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHKeyPath
	I0812 10:53:25.828065   30720 main.go:141] libmachine: (ha-919901-m02) Calling .GetSSHUsername
	I0812 10:53:25.828244   30720 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m02/id_rsa Username:docker}
	I0812 10:53:25.910190   30720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:53:25.926872   30720 kubeconfig.go:125] found "ha-919901" server: "https://192.168.39.254:8443"
	I0812 10:53:25.926900   30720 api_server.go:166] Checking apiserver status ...
	I0812 10:53:25.926930   30720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:53:25.942800   30720 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1437/cgroup
	W0812 10:53:25.953784   30720 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1437/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 10:53:25.953838   30720 ssh_runner.go:195] Run: ls
	I0812 10:53:25.958910   30720 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 10:53:25.963327   30720 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 10:53:25.963367   30720 status.go:422] ha-919901-m02 apiserver status = Running (err=<nil>)
	I0812 10:53:25.963375   30720 status.go:257] ha-919901-m02 status: &{Name:ha-919901-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 10:53:25.963389   30720 status.go:255] checking status of ha-919901-m04 ...
	I0812 10:53:25.963673   30720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:53:25.963704   30720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:53:25.978758   30720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33699
	I0812 10:53:25.979255   30720 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:53:25.979749   30720 main.go:141] libmachine: Using API Version  1
	I0812 10:53:25.979769   30720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:53:25.980050   30720 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:53:25.980205   30720 main.go:141] libmachine: (ha-919901-m04) Calling .GetState
	I0812 10:53:25.981802   30720 status.go:330] ha-919901-m04 host status = "Running" (err=<nil>)
	I0812 10:53:25.981816   30720 host.go:66] Checking if "ha-919901-m04" exists ...
	I0812 10:53:25.982146   30720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:53:25.982192   30720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:53:25.997620   30720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41837
	I0812 10:53:25.998155   30720 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:53:25.998695   30720 main.go:141] libmachine: Using API Version  1
	I0812 10:53:25.998719   30720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:53:25.999000   30720 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:53:25.999205   30720 main.go:141] libmachine: (ha-919901-m04) Calling .GetIP
	I0812 10:53:26.001980   30720 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:53:26.002550   30720 main.go:141] libmachine: (ha-919901-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:f6:73", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:50:52 +0000 UTC Type:0 Mac:52:54:00:4d:f6:73 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:ha-919901-m04 Clientid:01:52:54:00:4d:f6:73}
	I0812 10:53:26.002579   30720 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined IP address 192.168.39.218 and MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:53:26.002747   30720 host.go:66] Checking if "ha-919901-m04" exists ...
	I0812 10:53:26.003039   30720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:53:26.003072   30720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:53:26.019887   30720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37403
	I0812 10:53:26.020359   30720 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:53:26.020860   30720 main.go:141] libmachine: Using API Version  1
	I0812 10:53:26.020901   30720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:53:26.021194   30720 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:53:26.021406   30720 main.go:141] libmachine: (ha-919901-m04) Calling .DriverName
	I0812 10:53:26.021619   30720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:53:26.021636   30720 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHHostname
	I0812 10:53:26.024650   30720 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:53:26.025048   30720 main.go:141] libmachine: (ha-919901-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:f6:73", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:50:52 +0000 UTC Type:0 Mac:52:54:00:4d:f6:73 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:ha-919901-m04 Clientid:01:52:54:00:4d:f6:73}
	I0812 10:53:26.025084   30720 main.go:141] libmachine: (ha-919901-m04) DBG | domain ha-919901-m04 has defined IP address 192.168.39.218 and MAC address 52:54:00:4d:f6:73 in network mk-ha-919901
	I0812 10:53:26.025209   30720 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHPort
	I0812 10:53:26.025419   30720 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHKeyPath
	I0812 10:53:26.025589   30720 main.go:141] libmachine: (ha-919901-m04) Calling .GetSSHUsername
	I0812 10:53:26.025750   30720 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901-m04/id_rsa Username:docker}
	W0812 10:53:44.509111   30720 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.218:22: connect: no route to host
	W0812 10:53:44.509212   30720 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.218:22: connect: no route to host
	E0812 10:53:44.509226   30720 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.218:22: connect: no route to host
	I0812 10:53:44.509233   30720 status.go:257] ha-919901-m04 status: &{Name:ha-919901-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0812 10:53:44.509250   30720 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.218:22: connect: no route to host

** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-919901 status -v=7 --alsologtostderr" : exit status 3
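The exit status 3 above is driven by the worker probe: for each node, status opens an SSH session (the "df -h /var" runner calls in the log) and falls back to "host: Error" / "kubelet: Nonexistent" once the dial to 192.168.39.218:22 fails with "no route to host". A rough Go sketch of that kind of reachability check, with a plain TCP dial and a made-up nodeHostState struct standing in for minikube's real status types:

	// Sketch only: map an unreachable SSH port to an Error host state instead
	// of running the usual kubelet/apiserver checks.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// nodeHostState is a simplified stand-in for the per-node status struct
	// printed in the log (&{Name:... Host:... Kubelet:...}).
	type nodeHostState struct {
		Name    string
		Host    string // "Running" or "Error"
		Kubelet string // "Running" or "Nonexistent"
	}

	func probeNode(name, ip string, port int, timeout time.Duration) nodeHostState {
		addr := fmt.Sprintf("%s:%d", ip, port)
		conn, err := net.DialTimeout("tcp", addr, timeout)
		if err != nil {
			// e.g. "dial tcp 192.168.39.218:22: connect: no route to host"
			return nodeHostState{Name: name, Host: "Error", Kubelet: "Nonexistent"}
		}
		conn.Close()
		// A real check would now run commands over SSH; reaching the port is
		// only the first step.
		return nodeHostState{Name: name, Host: "Running", Kubelet: "Running"}
	}

	func main() {
		// The worker address from the failed run; expect an Error result while
		// the VM is unreachable.
		fmt.Printf("%+v\n", probeNode("ha-919901-m04", "192.168.39.218", 22, 5*time.Second))
	}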
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-919901 -n ha-919901
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-919901 logs -n 25: (1.771902883s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-919901 ssh -n ha-919901-m02 sudo cat                                          | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | /home/docker/cp-test_ha-919901-m03_ha-919901-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-919901 cp ha-919901-m03:/home/docker/cp-test.txt                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m04:/home/docker/cp-test_ha-919901-m03_ha-919901-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n                                                                 | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n ha-919901-m04 sudo cat                                          | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | /home/docker/cp-test_ha-919901-m03_ha-919901-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-919901 cp testdata/cp-test.txt                                                | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n                                                                 | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-919901 cp ha-919901-m04:/home/docker/cp-test.txt                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2587644134/001/cp-test_ha-919901-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n                                                                 | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-919901 cp ha-919901-m04:/home/docker/cp-test.txt                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901:/home/docker/cp-test_ha-919901-m04_ha-919901.txt                       |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n                                                                 | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n ha-919901 sudo cat                                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | /home/docker/cp-test_ha-919901-m04_ha-919901.txt                                 |           |         |         |                     |                     |
	| cp      | ha-919901 cp ha-919901-m04:/home/docker/cp-test.txt                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m02:/home/docker/cp-test_ha-919901-m04_ha-919901-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n                                                                 | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n ha-919901-m02 sudo cat                                          | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | /home/docker/cp-test_ha-919901-m04_ha-919901-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-919901 cp ha-919901-m04:/home/docker/cp-test.txt                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m03:/home/docker/cp-test_ha-919901-m04_ha-919901-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n                                                                 | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | ha-919901-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-919901 ssh -n ha-919901-m03 sudo cat                                          | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC | 12 Aug 24 10:41 UTC |
	|         | /home/docker/cp-test_ha-919901-m04_ha-919901-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-919901 node stop m02 -v=7                                                     | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:41 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-919901 node start m02 -v=7                                                    | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:43 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-919901 -v=7                                                           | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:44 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-919901 -v=7                                                                | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:44 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-919901 --wait=true -v=7                                                    | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:46 UTC | 12 Aug 24 10:51 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-919901                                                                | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:51 UTC |                     |
	| node    | ha-919901 node delete m03 -v=7                                                   | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:51 UTC | 12 Aug 24 10:51 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-919901 stop -v=7                                                              | ha-919901 | jenkins | v1.33.1 | 12 Aug 24 10:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 10:46:54
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 10:46:54.647303   28520 out.go:291] Setting OutFile to fd 1 ...
	I0812 10:46:54.647590   28520 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:46:54.647609   28520 out.go:304] Setting ErrFile to fd 2...
	I0812 10:46:54.647616   28520 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:46:54.647853   28520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 10:46:54.648452   28520 out.go:298] Setting JSON to false
	I0812 10:46:54.649433   28520 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1756,"bootTime":1723457859,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 10:46:54.649493   28520 start.go:139] virtualization: kvm guest
	I0812 10:46:54.651834   28520 out.go:177] * [ha-919901] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 10:46:54.653195   28520 notify.go:220] Checking for updates...
	I0812 10:46:54.653229   28520 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 10:46:54.654788   28520 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 10:46:54.656530   28520 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 10:46:54.658000   28520 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 10:46:54.659568   28520 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 10:46:54.661268   28520 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 10:46:54.663351   28520 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:46:54.663458   28520 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 10:46:54.663921   28520 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:46:54.663973   28520 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:46:54.679249   28520 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41375
	I0812 10:46:54.679709   28520 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:46:54.680222   28520 main.go:141] libmachine: Using API Version  1
	I0812 10:46:54.680250   28520 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:46:54.680639   28520 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:46:54.680927   28520 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:46:54.719607   28520 out.go:177] * Using the kvm2 driver based on existing profile
	I0812 10:46:54.721188   28520 start.go:297] selected driver: kvm2
	I0812 10:46:54.721211   28520 start.go:901] validating driver "kvm2" against &{Name:ha-919901 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-919901 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.218 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 10:46:54.721398   28520 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 10:46:54.721757   28520 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 10:46:54.721855   28520 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19409-3774/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 10:46:54.737988   28520 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 10:46:54.738726   28520 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 10:46:54.738801   28520 cni.go:84] Creating CNI manager for ""
	I0812 10:46:54.738818   28520 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0812 10:46:54.738886   28520 start.go:340] cluster config:
	{Name:ha-919901 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-919901 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.218 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 10:46:54.739030   28520 iso.go:125] acquiring lock: {Name:mk817f4bb6a5031e68978029203802957205757f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 10:46:54.740898   28520 out.go:177] * Starting "ha-919901" primary control-plane node in "ha-919901" cluster
	I0812 10:46:54.742273   28520 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 10:46:54.742354   28520 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0812 10:46:54.742370   28520 cache.go:56] Caching tarball of preloaded images
	I0812 10:46:54.742474   28520 preload.go:172] Found /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 10:46:54.742488   28520 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 10:46:54.742658   28520 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/config.json ...
	I0812 10:46:54.742949   28520 start.go:360] acquireMachinesLock for ha-919901: {Name:mkf1f3ff7da562a087a1e344d13e67c3c8140973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 10:46:54.743026   28520 start.go:364] duration metric: took 55.667µs to acquireMachinesLock for "ha-919901"
	I0812 10:46:54.743048   28520 start.go:96] Skipping create...Using existing machine configuration
	I0812 10:46:54.743056   28520 fix.go:54] fixHost starting: 
	I0812 10:46:54.743384   28520 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:46:54.743434   28520 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:46:54.758432   28520 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33143
	I0812 10:46:54.758851   28520 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:46:54.759512   28520 main.go:141] libmachine: Using API Version  1
	I0812 10:46:54.759532   28520 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:46:54.759901   28520 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:46:54.760119   28520 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:46:54.760266   28520 main.go:141] libmachine: (ha-919901) Calling .GetState
	I0812 10:46:54.761855   28520 fix.go:112] recreateIfNeeded on ha-919901: state=Running err=<nil>
	W0812 10:46:54.761871   28520 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 10:46:54.763895   28520 out.go:177] * Updating the running kvm2 "ha-919901" VM ...
	I0812 10:46:54.765301   28520 machine.go:94] provisionDockerMachine start ...
	I0812 10:46:54.765330   28520 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:46:54.765577   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:46:54.768339   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:46:54.768792   28520 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:46:54.768818   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:46:54.768997   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:46:54.769194   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:46:54.769362   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:46:54.769532   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:46:54.769718   28520 main.go:141] libmachine: Using SSH client type: native
	I0812 10:46:54.769908   28520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0812 10:46:54.769919   28520 main.go:141] libmachine: About to run SSH command:
	hostname
	I0812 10:46:54.881894   28520 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-919901
	
	I0812 10:46:54.881918   28520 main.go:141] libmachine: (ha-919901) Calling .GetMachineName
	I0812 10:46:54.882149   28520 buildroot.go:166] provisioning hostname "ha-919901"
	I0812 10:46:54.882170   28520 main.go:141] libmachine: (ha-919901) Calling .GetMachineName
	I0812 10:46:54.882393   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:46:54.885299   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:46:54.885829   28520 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:46:54.885856   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:46:54.886051   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:46:54.886287   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:46:54.886467   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:46:54.886596   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:46:54.886758   28520 main.go:141] libmachine: Using SSH client type: native
	I0812 10:46:54.886926   28520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0812 10:46:54.886938   28520 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-919901 && echo "ha-919901" | sudo tee /etc/hostname
	I0812 10:46:55.015696   28520 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-919901
	
	I0812 10:46:55.015721   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:46:55.018782   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:46:55.019262   28520 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:46:55.019283   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:46:55.019514   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:46:55.019731   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:46:55.019904   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:46:55.020033   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:46:55.020201   28520 main.go:141] libmachine: Using SSH client type: native
	I0812 10:46:55.020372   28520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0812 10:46:55.020387   28520 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-919901' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-919901/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-919901' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 10:46:55.138466   28520 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 10:46:55.138503   28520 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19409-3774/.minikube CaCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19409-3774/.minikube}
	I0812 10:46:55.138520   28520 buildroot.go:174] setting up certificates
	I0812 10:46:55.138528   28520 provision.go:84] configureAuth start
	I0812 10:46:55.138536   28520 main.go:141] libmachine: (ha-919901) Calling .GetMachineName
	I0812 10:46:55.138808   28520 main.go:141] libmachine: (ha-919901) Calling .GetIP
	I0812 10:46:55.141593   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:46:55.141932   28520 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:46:55.141952   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:46:55.142072   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:46:55.144412   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:46:55.144837   28520 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:46:55.144860   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:46:55.145016   28520 provision.go:143] copyHostCerts
	I0812 10:46:55.145057   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem
	I0812 10:46:55.145093   28520 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem, removing ...
	I0812 10:46:55.145102   28520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem
	I0812 10:46:55.145173   28520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem (1082 bytes)
	I0812 10:46:55.145264   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem
	I0812 10:46:55.145281   28520 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem, removing ...
	I0812 10:46:55.145285   28520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem
	I0812 10:46:55.145309   28520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem (1123 bytes)
	I0812 10:46:55.145363   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem
	I0812 10:46:55.145379   28520 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem, removing ...
	I0812 10:46:55.145385   28520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem
	I0812 10:46:55.145408   28520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem (1679 bytes)
	I0812 10:46:55.145475   28520 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem org=jenkins.ha-919901 san=[127.0.0.1 192.168.39.5 ha-919901 localhost minikube]
	I0812 10:46:55.466340   28520 provision.go:177] copyRemoteCerts
	I0812 10:46:55.466412   28520 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 10:46:55.466439   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:46:55.469148   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:46:55.469526   28520 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:46:55.469569   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:46:55.469692   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:46:55.469935   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:46:55.470166   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:46:55.470387   28520 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:46:55.556143   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0812 10:46:55.556225   28520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0812 10:46:55.585239   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0812 10:46:55.585304   28520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0812 10:46:55.614632   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0812 10:46:55.614716   28520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0812 10:46:55.641009   28520 provision.go:87] duration metric: took 502.4708ms to configureAuth
	I0812 10:46:55.641036   28520 buildroot.go:189] setting minikube options for container-runtime
	I0812 10:46:55.641269   28520 config.go:182] Loaded profile config "ha-919901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:46:55.641356   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:46:55.643952   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:46:55.644448   28520 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:46:55.644486   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:46:55.644666   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:46:55.644883   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:46:55.645040   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:46:55.645186   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:46:55.645331   28520 main.go:141] libmachine: Using SSH client type: native
	I0812 10:46:55.645518   28520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0812 10:46:55.645539   28520 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 10:48:26.551171   28520 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 10:48:26.551195   28520 machine.go:97] duration metric: took 1m31.785877087s to provisionDockerMachine
	I0812 10:48:26.551206   28520 start.go:293] postStartSetup for "ha-919901" (driver="kvm2")
	I0812 10:48:26.551223   28520 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 10:48:26.551236   28520 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:48:26.551612   28520 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 10:48:26.551648   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:48:26.554801   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:48:26.555244   28520 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:48:26.555270   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:48:26.555542   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:48:26.555785   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:48:26.555978   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:48:26.556117   28520 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:48:26.645400   28520 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 10:48:26.649890   28520 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 10:48:26.649913   28520 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/addons for local assets ...
	I0812 10:48:26.649972   28520 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/files for local assets ...
	I0812 10:48:26.650066   28520 filesync.go:149] local asset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> 109272.pem in /etc/ssl/certs
	I0812 10:48:26.650077   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> /etc/ssl/certs/109272.pem
	I0812 10:48:26.650155   28520 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 10:48:26.659882   28520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /etc/ssl/certs/109272.pem (1708 bytes)
	I0812 10:48:26.687091   28520 start.go:296] duration metric: took 135.864438ms for postStartSetup
	I0812 10:48:26.687149   28520 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:48:26.687437   28520 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0812 10:48:26.687460   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:48:26.690115   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:48:26.690471   28520 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:48:26.690498   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:48:26.690653   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:48:26.690948   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:48:26.691179   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:48:26.691415   28520 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	W0812 10:48:26.775490   28520 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0812 10:48:26.775519   28520 fix.go:56] duration metric: took 1m32.03246339s for fixHost
	I0812 10:48:26.775541   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:48:26.778127   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:48:26.778484   28520 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:48:26.778507   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:48:26.778677   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:48:26.778897   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:48:26.779056   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:48:26.779175   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:48:26.779321   28520 main.go:141] libmachine: Using SSH client type: native
	I0812 10:48:26.779484   28520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0812 10:48:26.779494   28520 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0812 10:48:26.889744   28520 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723459706.854157384
	
	I0812 10:48:26.889769   28520 fix.go:216] guest clock: 1723459706.854157384
	I0812 10:48:26.889776   28520 fix.go:229] Guest: 2024-08-12 10:48:26.854157384 +0000 UTC Remote: 2024-08-12 10:48:26.775526324 +0000 UTC m=+92.165330545 (delta=78.63106ms)
	I0812 10:48:26.889794   28520 fix.go:200] guest clock delta is within tolerance: 78.63106ms
	I0812 10:48:26.889799   28520 start.go:83] releasing machines lock for "ha-919901", held for 1m32.146762409s
	I0812 10:48:26.889817   28520 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:48:26.890098   28520 main.go:141] libmachine: (ha-919901) Calling .GetIP
	I0812 10:48:26.892737   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:48:26.893183   28520 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:48:26.893216   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:48:26.893455   28520 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:48:26.893974   28520 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:48:26.894206   28520 main.go:141] libmachine: (ha-919901) Calling .DriverName
	I0812 10:48:26.894295   28520 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 10:48:26.894343   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:48:26.894445   28520 ssh_runner.go:195] Run: cat /version.json
	I0812 10:48:26.894463   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHHostname
	I0812 10:48:26.897068   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:48:26.897474   28520 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:48:26.897502   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:48:26.897521   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:48:26.897644   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:48:26.897802   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:48:26.897965   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:48:26.897988   28520 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:48:26.898012   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:48:26.898146   28520 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:48:26.898168   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHPort
	I0812 10:48:26.898313   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHKeyPath
	I0812 10:48:26.898467   28520 main.go:141] libmachine: (ha-919901) Calling .GetSSHUsername
	I0812 10:48:26.898610   28520 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/ha-919901/id_rsa Username:docker}
	I0812 10:48:27.012954   28520 ssh_runner.go:195] Run: systemctl --version
	I0812 10:48:27.019846   28520 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 10:48:27.181931   28520 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 10:48:27.188435   28520 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 10:48:27.188510   28520 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 10:48:27.197607   28520 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0812 10:48:27.197630   28520 start.go:495] detecting cgroup driver to use...
	I0812 10:48:27.197689   28520 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 10:48:27.214884   28520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 10:48:27.229268   28520 docker.go:217] disabling cri-docker service (if available) ...
	I0812 10:48:27.229374   28520 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 10:48:27.243258   28520 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 10:48:27.256804   28520 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 10:48:27.405651   28520 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 10:48:27.552354   28520 docker.go:233] disabling docker service ...
	I0812 10:48:27.552437   28520 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 10:48:27.569174   28520 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 10:48:27.583125   28520 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 10:48:27.727277   28520 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 10:48:27.874360   28520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 10:48:27.888390   28520 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 10:48:27.909232   28520 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0812 10:48:27.909284   28520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:48:27.919808   28520 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 10:48:27.919881   28520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:48:27.930266   28520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:48:27.940829   28520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:48:27.951425   28520 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 10:48:27.962304   28520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:48:27.973178   28520 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:48:27.984696   28520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 10:48:27.995115   28520 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 10:48:28.004730   28520 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 10:48:28.014083   28520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 10:48:28.159247   28520 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0812 10:48:35.567027   28520 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.40774347s)
	I0812 10:48:35.567055   28520 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 10:48:35.567123   28520 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 10:48:35.571931   28520 start.go:563] Will wait 60s for crictl version
	I0812 10:48:35.571999   28520 ssh_runner.go:195] Run: which crictl
	I0812 10:48:35.576285   28520 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 10:48:35.616512   28520 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0812 10:48:35.616589   28520 ssh_runner.go:195] Run: crio --version
	I0812 10:48:35.646316   28520 ssh_runner.go:195] Run: crio --version
	I0812 10:48:35.676080   28520 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0812 10:48:35.677507   28520 main.go:141] libmachine: (ha-919901) Calling .GetIP
	I0812 10:48:35.680220   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:48:35.680690   28520 main.go:141] libmachine: (ha-919901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:40:2a", ip: ""} in network mk-ha-919901: {Iface:virbr1 ExpiryTime:2024-08-12 11:36:50 +0000 UTC Type:0 Mac:52:54:00:8b:40:2a Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-919901 Clientid:01:52:54:00:8b:40:2a}
	I0812 10:48:35.680718   28520 main.go:141] libmachine: (ha-919901) DBG | domain ha-919901 has defined IP address 192.168.39.5 and MAC address 52:54:00:8b:40:2a in network mk-ha-919901
	I0812 10:48:35.681012   28520 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0812 10:48:35.685887   28520 kubeadm.go:883] updating cluster {Name:ha-919901 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-919901 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.218 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0812 10:48:35.686032   28520 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 10:48:35.686076   28520 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 10:48:35.729838   28520 crio.go:514] all images are preloaded for cri-o runtime.
	I0812 10:48:35.729862   28520 crio.go:433] Images already preloaded, skipping extraction
	I0812 10:48:35.729906   28520 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 10:48:35.766383   28520 crio.go:514] all images are preloaded for cri-o runtime.
	I0812 10:48:35.766406   28520 cache_images.go:84] Images are preloaded, skipping loading
	I0812 10:48:35.766414   28520 kubeadm.go:934] updating node { 192.168.39.5 8443 v1.30.3 crio true true} ...
	I0812 10:48:35.766504   28520 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-919901 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-919901 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0812 10:48:35.766569   28520 ssh_runner.go:195] Run: crio config
	I0812 10:48:35.816179   28520 cni.go:84] Creating CNI manager for ""
	I0812 10:48:35.816200   28520 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0812 10:48:35.816211   28520 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0812 10:48:35.816245   28520 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.5 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-919901 NodeName:ha-919901 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0812 10:48:35.816413   28520 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-919901"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0812 10:48:35.816436   28520 kube-vip.go:115] generating kube-vip config ...
	I0812 10:48:35.816485   28520 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0812 10:48:35.827685   28520 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0812 10:48:35.827806   28520 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0812 10:48:35.827874   28520 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0812 10:48:35.837344   28520 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 10:48:35.837424   28520 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0812 10:48:35.846700   28520 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0812 10:48:35.863467   28520 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 10:48:35.880185   28520 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0812 10:48:35.896905   28520 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0812 10:48:35.913728   28520 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0812 10:48:35.918556   28520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 10:48:36.063675   28520 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 10:48:36.078652   28520 certs.go:68] Setting up /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901 for IP: 192.168.39.5
	I0812 10:48:36.078679   28520 certs.go:194] generating shared ca certs ...
	I0812 10:48:36.078698   28520 certs.go:226] acquiring lock for ca certs: {Name:mkab0b83a49d5695bc804bf960b468b7a47f73f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:48:36.078871   28520 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key
	I0812 10:48:36.078927   28520 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key
	I0812 10:48:36.078939   28520 certs.go:256] generating profile certs ...
	I0812 10:48:36.079048   28520 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/client.key
	I0812 10:48:36.079083   28520 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key.73ff17da
	I0812 10:48:36.079116   28520 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt.73ff17da with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.5 192.168.39.139 192.168.39.195 192.168.39.254]
	I0812 10:48:36.322084   28520 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt.73ff17da ...
	I0812 10:48:36.322116   28520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt.73ff17da: {Name:mk95510ba6d23b1a8b9a440efe74085f486357b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:48:36.322281   28520 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key.73ff17da ...
	I0812 10:48:36.322292   28520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key.73ff17da: {Name:mk5a5edb5733fe7a10dc1627b88ff9518edb7b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:48:36.322365   28520 certs.go:381] copying /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt.73ff17da -> /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt
	I0812 10:48:36.322526   28520 certs.go:385] copying /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key.73ff17da -> /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key
	I0812 10:48:36.322646   28520 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.key
	I0812 10:48:36.322663   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0812 10:48:36.322675   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0812 10:48:36.322717   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0812 10:48:36.322737   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0812 10:48:36.322749   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0812 10:48:36.322762   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0812 10:48:36.322774   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0812 10:48:36.322786   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0812 10:48:36.322829   28520 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem (1338 bytes)
	W0812 10:48:36.322855   28520 certs.go:480] ignoring /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927_empty.pem, impossibly tiny 0 bytes
	I0812 10:48:36.322865   28520 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem (1679 bytes)
	I0812 10:48:36.322887   28520 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem (1082 bytes)
	I0812 10:48:36.322907   28520 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem (1123 bytes)
	I0812 10:48:36.322928   28520 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem (1679 bytes)
	I0812 10:48:36.322963   28520 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem (1708 bytes)
	I0812 10:48:36.322989   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:48:36.323003   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem -> /usr/share/ca-certificates/10927.pem
	I0812 10:48:36.323015   28520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> /usr/share/ca-certificates/109272.pem
	I0812 10:48:36.323581   28520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 10:48:36.349235   28520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 10:48:36.372664   28520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 10:48:36.396478   28520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 10:48:36.420885   28520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0812 10:48:36.446496   28520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0812 10:48:36.470700   28520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 10:48:36.494793   28520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/ha-919901/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0812 10:48:36.519235   28520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 10:48:36.543049   28520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem --> /usr/share/ca-certificates/10927.pem (1338 bytes)
	I0812 10:48:36.567499   28520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /usr/share/ca-certificates/109272.pem (1708 bytes)
	I0812 10:48:36.591458   28520 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 10:48:36.608293   28520 ssh_runner.go:195] Run: openssl version
	I0812 10:48:36.614417   28520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 10:48:36.625750   28520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:48:36.630462   28520 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:48:36.630526   28520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:48:36.636215   28520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 10:48:36.646197   28520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10927.pem && ln -fs /usr/share/ca-certificates/10927.pem /etc/ssl/certs/10927.pem"
	I0812 10:48:36.657324   28520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10927.pem
	I0812 10:48:36.662003   28520 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 10:33 /usr/share/ca-certificates/10927.pem
	I0812 10:48:36.662072   28520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10927.pem
	I0812 10:48:36.667650   28520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10927.pem /etc/ssl/certs/51391683.0"
	I0812 10:48:36.677606   28520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109272.pem && ln -fs /usr/share/ca-certificates/109272.pem /etc/ssl/certs/109272.pem"
	I0812 10:48:36.689338   28520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109272.pem
	I0812 10:48:36.693804   28520 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 10:33 /usr/share/ca-certificates/109272.pem
	I0812 10:48:36.693878   28520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109272.pem
	I0812 10:48:36.699797   28520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109272.pem /etc/ssl/certs/3ec20f2e.0"
	I0812 10:48:36.711165   28520 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 10:48:36.715948   28520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0812 10:48:36.722003   28520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0812 10:48:36.727835   28520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0812 10:48:36.733758   28520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0812 10:48:36.739899   28520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0812 10:48:36.745475   28520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0812 10:48:36.751494   28520 kubeadm.go:392] StartCluster: {Name:ha-919901 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-919901 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.218 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 10:48:36.751643   28520 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0812 10:48:36.751698   28520 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 10:48:36.790666   28520 cri.go:89] found id: "a8766116f2e58d7532c947c56192d66b7cdc96b2954f05c3a7e3999a645c5edc"
	I0812 10:48:36.790693   28520 cri.go:89] found id: "10b588fc239e3d3313ca309e1f13be69d19663d8914ac6cbccaa255b1f5a1192"
	I0812 10:48:36.790699   28520 cri.go:89] found id: "7a668d0f8e974a7ccd5a60e3be4f4d50b878d943bc7a9e3da000080ca527cd67"
	I0812 10:48:36.790704   28520 cri.go:89] found id: "7fed01d7160560309c4ee6b8b6f4ee49e2169be938b7bd960d22a6e413d73e4f"
	I0812 10:48:36.790708   28520 cri.go:89] found id: "6d0c6b246369b77b455404b70fd88b94e4da26cbd201a7ae30b74cb7039d25b8"
	I0812 10:48:36.790713   28520 cri.go:89] found id: "ec7364f484b0d7fa35ace0124b8dc922a617b051b4ef8dc8d5513b3c9f49bf2b"
	I0812 10:48:36.790717   28520 cri.go:89] found id: "4d3c2394cc8cd76cb420cb4132296fa326bf586e6079af20c991bbd39a02e6bf"
	I0812 10:48:36.790722   28520 cri.go:89] found id: "7cd3e13fb2b3b4d48e2306ffd36c6241a47faafe75f1b528e3a316c9bec0705f"
	I0812 10:48:36.790726   28520 cri.go:89] found id: "52237e0a859ca116f637782e69b8c477b172bcffe7dd962dcf7401651171c5ed"
	I0812 10:48:36.790733   28520 cri.go:89] found id: "2af78571207ce34c193f10ac91fe17888677e013439ec5e2bf1781b8f46309bf"
	I0812 10:48:36.790742   28520 cri.go:89] found id: "0c30877cfdccaa151966735f7bbaba34a1f500f301ce489538d01d863c68ee14"
	I0812 10:48:36.790747   28520 cri.go:89] found id: "2b624c8fe2100a8281fab931d59941e13a68b3367ee7a36ece28d6087e8d1a6f"
	I0812 10:48:36.790751   28520 cri.go:89] found id: "e76a506154546c22ce7972ea95053e0254f2cc2e30d7e1e31a666f212969115e"
	I0812 10:48:36.790755   28520 cri.go:89] found id: ""
	I0812 10:48:36.790810   28520 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 12 10:53:45 ha-919901 crio[3808]: time="2024-08-12 10:53:45.137628458Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:af3930beb96f25570de66cfa8952d80d38d9f0a0a2a80f6dc13c475062fab782,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-pj8gg,Uid:b9a02941-b2f3-4ffe-bdca-07a7322887b1,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723459755149522712,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-pj8gg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9a02941-b2f3-4ffe-bdca-07a7322887b1,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-12T10:40:14.579730104Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f299608085a7359bb3ee02d4f12dbdf326b63649c5108f0c5a39af1e83398c66,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-919901,Uid:82b0f1622d3c68c0a51defdcc0ae67a3,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1723459733180125005,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b0f1622d3c68c0a51defdcc0ae67a3,},Annotations:map[string]string{kubernetes.io/config.hash: 82b0f1622d3c68c0a51defdcc0ae67a3,kubernetes.io/config.seen: 2024-08-12T10:48:35.879162711Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:27c3c8acb92734404a0cd004ccd0c8b0c860547b5d72a17e4152fbee9b56e59c,Metadata:&PodSandboxMetadata{Name:kube-proxy-ftvfl,Uid:7ed243a1-62f6-4ad1-8873-0fbe1756be9e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723459721496721508,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-ftvfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed243a1-62f6-4ad1-8873-0fbe1756be9e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/
config.seen: 2024-08-12T10:37:27.511903591Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6da31f89d702cc43c1ee7ce2d665857288109222a66679b4cbcef3fbafef0ad7,Metadata:&PodSandboxMetadata{Name:kindnet-k5wz9,Uid:75e585a5-9ab7-4211-8ed0-dc1d21345883,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723459721463407252,Labels:map[string]string{app: kindnet,controller-revision-hash: 7c6d997646,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-k5wz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e585a5-9ab7-4211-8ed0-dc1d21345883,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-12T10:37:27.521545132Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b50fc8ef65be2259d321aba349f98c459ce88e2978e02c7d7453416d5fce1a0d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-919901,Uid:1b2498c72d72e1e71b3b9015542989ea,Namespace:kube-system,Attempt:1,},State:SAN
DBOX_READY,CreatedAt:1723459721462709546,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2498c72d72e1e71b3b9015542989ea,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1b2498c72d72e1e71b3b9015542989ea,kubernetes.io/config.seen: 2024-08-12T10:37:14.416276975Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:66d278adbf4b55ffb36576211a5c3ba25b269a1e237662e92d9788f67d2365ff,Metadata:&PodSandboxMetadata{Name:etcd-ha-919901,Uid:148b0299f3f1839b42d2ab8e65cc0f2a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723459721461726095,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148b0299f3f1839b42d2ab8e65cc0f2a,tier: control-plane,},Annotations:map[string]stri
ng{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.5:2379,kubernetes.io/config.hash: 148b0299f3f1839b42d2ab8e65cc0f2a,kubernetes.io/config.seen: 2024-08-12T10:37:14.416274814Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c588fd38b169b04dc89c2057742aef16a4b575345f9dfef462d8bebae9746711,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-wstd4,Uid:53bfc998-8d70-4dc5-b0f9-a78610183a2b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723459721419454059,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-wstd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53bfc998-8d70-4dc5-b0f9-a78610183a2b,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-12T10:37:44.369996799Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:34445bb6eb65cf7c05d06cb43e6f84c241c458dbeefabbd6a15e9e33ca49e151,Metadata:&PodSandboxMetadata{
Name:kube-scheduler-ha-919901,Uid:29e4b07d53879d3ec685dd71228335fe,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723459721377977235,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e4b07d53879d3ec685dd71228335fe,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 29e4b07d53879d3ec685dd71228335fe,kubernetes.io/config.seen: 2024-08-12T10:37:14.416277805Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b77024a4392f2dddcb6ccb1abc30040538c3db974a8f3914c00e2c9eb23dd930,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-919901,Uid:37e967e3926409b9b4490fa429d62fdc,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723459721374486220,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-919901,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 37e967e3926409b9b4490fa429d62fdc,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.5:8443,kubernetes.io/config.hash: 37e967e3926409b9b4490fa429d62fdc,kubernetes.io/config.seen: 2024-08-12T10:37:14.416276009Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b75bef0d429e552365caad1acf173c2947f316941914919974b6b9e62ff8b1d6,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:6d697e68-33fa-4784-90d8-0561d3fff6a8,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723459721353324828,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d697e68-33fa-4784-90d8-0561d3fff6a8,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"
v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-12T10:37:44.369283237Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9201197c1ac54eaf6a8c84ccaa8d2d8589790723cd2a7be14900c7a9bfd334ba,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-rc7cl,Uid:92f21234-d4e8-4f0e-a8e5-356db2297843,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723459716918102103,Label
s:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-rc7cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92f21234-d4e8-4f0e-a8e5-356db2297843,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-12T10:37:44.359557743Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:40dfaa461230a2e373c966c081e58e06b69e54e472764d0338e67823d0a759f7,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-pj8gg,Uid:b9a02941-b2f3-4ffe-bdca-07a7322887b1,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723459214896942218,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-pj8gg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9a02941-b2f3-4ffe-bdca-07a7322887b1,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-12T10:40:14.579730104Z,kubernetes.io/config.source: ap
i,},RuntimeHandler:,},&PodSandbox{Id:7ee3eb4b0b10eb268d84ccf4b86511e118adba8ff72671265ad6936653c79876,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-wstd4,Uid:53bfc998-8d70-4dc5-b0f9-a78610183a2b,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723459064979390014,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-wstd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53bfc998-8d70-4dc5-b0f9-a78610183a2b,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-12T10:37:44.369996799Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a88f690225d3f6e35d63e64a591930de23dbdadec994a1d50c2a1302aaf9567d,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-rc7cl,Uid:92f21234-d4e8-4f0e-a8e5-356db2297843,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723459064966730412,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubern
etes.pod.name: coredns-7db6d8ff4d-rc7cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92f21234-d4e8-4f0e-a8e5-356db2297843,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-12T10:37:44.359557743Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2abd5fefba6f34c44814b969b1a9bc9f1aa44fea6ad95145b733cb0c7e82bff9,Metadata:&PodSandboxMetadata{Name:kindnet-k5wz9,Uid:75e585a5-9ab7-4211-8ed0-dc1d21345883,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723459048130501804,Labels:map[string]string{app: kindnet,controller-revision-hash: 7c6d997646,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-k5wz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e585a5-9ab7-4211-8ed0-dc1d21345883,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-12T10:37:27.521545132Z,kubernetes.io/config.source: api,},RuntimeH
andler:,},&PodSandbox{Id:b7d28551c45a6f766983f4c1618f75eefcc2aa63991e48279d50e948648a119a,Metadata:&PodSandboxMetadata{Name:kube-proxy-ftvfl,Uid:7ed243a1-62f6-4ad1-8873-0fbe1756be9e,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723459047825763348,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-ftvfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed243a1-62f6-4ad1-8873-0fbe1756be9e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-12T10:37:27.511903591Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:06243d97384e5ebbf9480bd664848a21f3329d17103ac89c25d646cd5c6af2ce,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-919901,Uid:29e4b07d53879d3ec685dd71228335fe,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723459027821988246,Labels:map[string]string{component: kube-scheduler,io.kubernetes
.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e4b07d53879d3ec685dd71228335fe,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 29e4b07d53879d3ec685dd71228335fe,kubernetes.io/config.seen: 2024-08-12T10:37:07.326412030Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fae04d253fe0c5b3c2344ea51b9e34bb9ffbb9a35549113dbbdf82302161c5db,Metadata:&PodSandboxMetadata{Name:etcd-ha-919901,Uid:148b0299f3f1839b42d2ab8e65cc0f2a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723459027784760494,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148b0299f3f1839b42d2ab8e65cc0f2a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.5:2379,kubernetes.io/config.hash: 148b0299f3f183
9b42d2ab8e65cc0f2a,kubernetes.io/config.seen: 2024-08-12T10:37:07.326403377Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=46043216-5931-4dc4-8672-4f566edc980d name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 12 10:53:45 ha-919901 crio[3808]: time="2024-08-12 10:53:45.139404026Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8f01b445-dc42-4078-b5fb-93e83df3c5bb name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:53:45 ha-919901 crio[3808]: time="2024-08-12 10:53:45.139476167Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8f01b445-dc42-4078-b5fb-93e83df3c5bb name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:53:45 ha-919901 crio[3808]: time="2024-08-12 10:53:45.139944490Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1feef8d0a7509a3143f3435dbab4d706c2a3b37b5a098b71fe9c4ed101579303,PodSandboxId:b75bef0d429e552365caad1acf173c2947f316941914919974b6b9e62ff8b1d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723459766461275399,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d697e68-33fa-4784-90d8-0561d3fff6a8,},Annotations:map[string]string{io.kubernetes.container.hash: 4520a48c,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e1fc5e3909238498769b1e7c49de8d11bd947aa9683e202c5cf20d8b125b790,PodSandboxId:b77024a4392f2dddcb6ccb1abc30040538c3db974a8f3914c00e2c9eb23dd930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723459765444925550,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e967e3926409b9b4490fa429d62fdc,},Annotations:map[string]string{io.kubernetes.container.hash: a14afb07,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c65bbec166c0e1e29b1dc74149f68f9ae8fc6eb749087afa70771e501ea1ea,PodSandboxId:b50fc8ef65be2259d321aba349f98c459ce88e2978e02c7d7453416d5fce1a0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723459762452923199,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2498c72d72e1e71b3b9015542989ea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02a975906041de1c0d96f3482a8837100f6c729585f87ca832b98cf7a9f71edc,PodSandboxId:af3930beb96f25570de66cfa8952d80d38d9f0a0a2a80f6dc13c475062fab782,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723459755302124395,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pj8gg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9a02941-b2f3-4ffe-bdca-07a7322887b1,},Annotations:map[string]string{io.kubernetes.container.hash: cbfab74f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2fc5ccdb449f41c11d07f7a4e5f0213f29756ab76385938d7d4be97b5cb121,PodSandboxId:f299608085a7359bb3ee02d4f12dbdf326b63649c5108f0c5a39af1e83398c66,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723459733285047880,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b0f1622d3c68c0a51defdcc0ae67a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6976ec7a56e859fcd49a53e0a3c9b9e23fa6ab1283c344676b64a27bc30f3ff,PodSandboxId:27c3c8acb92734404a0cd004ccd0c8b0c860547b5d72a17e4152fbee9b56e59c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723459722141629124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftvfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed243a1-62f6-4ad1-8873-0fbe1756be9e,},Annotations:map[string]string{io.kubernetes.container.hash: 3e160f3e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:bc6462a604f646f4d41247df33068d997dc236d79cc2786c0530f72f7574d1ee,PodSandboxId:6da31f89d702cc43c1ee7ce2d665857288109222a66679b4cbcef3fbafef0ad7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723459722102442243,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k5wz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e585a5-9ab7-4211-8ed0-dc1d21345883,},Annotations:map[string]string{io.kubernetes.container.hash: 44f25b1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:1766e0cc1e04cbf0b71e2ea90c9155d15810d451c0d3d7eba275dd2bc5f17ae2,PodSandboxId:b75bef0d429e552365caad1acf173c2947f316941914919974b6b9e62ff8b1d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723459721772325198,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d697e68-33fa-4784-90d8-0561d3fff6a8,},Annotations:map[string]string{io.kubernetes.container.hash: 4520a48c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:819e54d9
554ed5489fc402272cb1ef7f5adb6f5d6e5b210f0649673078590180,PodSandboxId:66d278adbf4b55ffb36576211a5c3ba25b269a1e237662e92d9788f67d2365ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723459721901879539,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148b0299f3f1839b42d2ab8e65cc0f2a,},Annotations:map[string]string{io.kubernetes.container.hash: 1194859b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee56da3827469ffbae6d2e0fafc2a824aa82ce08fc5374d33336e5201fe
a5df5,PodSandboxId:c588fd38b169b04dc89c2057742aef16a4b575345f9dfef462d8bebae9746711,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723459721867253840,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wstd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53bfc998-8d70-4dc5-b0f9-a78610183a2b,},Annotations:map[string]string{io.kubernetes.container.hash: de677b7f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f2b335c58f4e58dc1016841c6013788ca91f08610028f1b3191acb56e98aa93,PodSandboxId:b50fc8ef65be2259d321aba349f98c459ce88e2978e02c7d7453416d5fce1a0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723459721859873479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2498c72d72e1e71b3b9015542989ea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc2643d16d41c975a1af1ee9129789ff983df1bb4c8e03c11fbda01cd3f898d4,PodSandboxId:34445bb6eb65cf7c05d06cb43e6f84c241c458dbeefabbd6a15e9e33ca49e151,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723459721715881844,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e4b07d53879d3ec685dd71228335fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40a98a9a1e93623da9b24a95b7598aefeec227db5303dd2ec1bfad11b70d58bc,PodSandboxId:b77024a4392f2dddcb6ccb1abc30040538c3db974a8f3914c00e2c9eb23dd930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723459721619677152,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e967e3926409b9b4490fa429d62fdc,},Annotations:map[string]string{io.kubernetes.container.hash: a14afb07,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87dc2b222be5fad0df75edbfc5ffee9c05f568c78aaefea95c0fcf09ce77244e,PodSandboxId:9201197c1ac54eaf6a8c84ccaa8d2d8589790723cd2a7be14900c7a9bfd334ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723459717045196445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rc7cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92f21234-d4e8-4f0e-a8e5-356db2297843,},Annotations:map[string]string{io.kubernetes.container.hash: f28abbb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8542d2fe34f2b44c191e084e1f85f0eb7b1a0b1a39d63b3fd3ba0510027c5668,PodSandboxId:40dfaa461230a2e373c966c081e58e06b69e54e472764d0338e67823d0a759f7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723459217676022810,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pj8gg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9a02941-b2f3-4ffe-bdca-07a7322887b1,},Annot
ations:map[string]string{io.kubernetes.container.hash: cbfab74f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0c6b246369b77b455404b70fd88b94e4da26cbd201a7ae30b74cb7039d25b8,PodSandboxId:7ee3eb4b0b10eb268d84ccf4b86511e118adba8ff72671265ad6936653c79876,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723459065194039855,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wstd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53bfc998-8d70-4dc5-b0f9-a78610183a2b,},Annotations:map[string]string{io.kube
rnetes.container.hash: de677b7f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec7364f484b0d7fa35ace0124b8dc922a617b051b4ef8dc8d5513b3c9f49bf2b,PodSandboxId:a88f690225d3f6e35d63e64a591930de23dbdadec994a1d50c2a1302aaf9567d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723459065148082153,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rc7cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92f21234-d4e8-4f0e-a8e5-356db2297843,},Annotations:map[string]string{io.kubernetes.container.hash: f28abbb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d3c2394cc8cd76cb420cb4132296fa326bf586e6079af20c991bbd39a02e6bf,PodSandboxId:2abd5fefba6f34c44814b969b1a9bc9f1aa44fea6ad95145b733cb0c7e82bff9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723459052942878767,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k5wz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e585a5-9ab7-4211-8ed0-dc1d21345883,},Annotations:map[string]string{io.kubernetes.container.hash: 44f25b1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd3e13fb2b3b4d48e2306ffd36c6241a47faafe75f1b528e3a316c9bec0705f,PodSandboxId:b7d28551c45a6f766983f4c1618f75eefcc2aa63991e48279d50e948648a119a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723459048117998507,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftvfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed243a1-62f6-4ad1-8873-0fbe1756be9e,},Annotations:map[string]string{io.kubernetes.container.hash: 3e160f3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af78571207ce34c193f10ac91fe17888677e013439ec5e2bf1781b8f46309bf,PodSandboxId:06243d97384e5ebbf9480bd664848a21f3329d17103ac89c25d646cd5c6af2ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723459028074909889,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e4b07d53879d3ec685dd71228335fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c30877cfdccaa151966735f7bbaba34a1f500f301ce489538d01d863c68ee14,PodSandboxId:fae04d253fe0c5b3c2344ea51b9e34bb9ffbb9a35549113dbbdf82302161c5db,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723459028024477228,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148b0299f3f1839b42d2ab8e65cc0f2a,},Annotations:map[string]string{io.kubernetes.container.hash: 1194859b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8f01b445-dc42-4078-b5fb-93e83df3c5bb name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:53:45 ha-919901 crio[3808]: time="2024-08-12 10:53:45.192170083Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2a30f9bd-66dd-4e7c-ab46-4e969e2dcd6d name=/runtime.v1.RuntimeService/Version
	Aug 12 10:53:45 ha-919901 crio[3808]: time="2024-08-12 10:53:45.192364254Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2a30f9bd-66dd-4e7c-ab46-4e969e2dcd6d name=/runtime.v1.RuntimeService/Version
	Aug 12 10:53:45 ha-919901 crio[3808]: time="2024-08-12 10:53:45.193539713Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=10f46969-dc7a-417d-b4eb-be3e0dc35f88 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:53:45 ha-919901 crio[3808]: time="2024-08-12 10:53:45.193996267Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723460025193971822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=10f46969-dc7a-417d-b4eb-be3e0dc35f88 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:53:45 ha-919901 crio[3808]: time="2024-08-12 10:53:45.194652831Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=934a9bf2-90ba-4ff2-9940-8b0847096d73 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:53:45 ha-919901 crio[3808]: time="2024-08-12 10:53:45.194721333Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=934a9bf2-90ba-4ff2-9940-8b0847096d73 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:53:45 ha-919901 crio[3808]: time="2024-08-12 10:53:45.195148805Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1feef8d0a7509a3143f3435dbab4d706c2a3b37b5a098b71fe9c4ed101579303,PodSandboxId:b75bef0d429e552365caad1acf173c2947f316941914919974b6b9e62ff8b1d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723459766461275399,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d697e68-33fa-4784-90d8-0561d3fff6a8,},Annotations:map[string]string{io.kubernetes.container.hash: 4520a48c,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e1fc5e3909238498769b1e7c49de8d11bd947aa9683e202c5cf20d8b125b790,PodSandboxId:b77024a4392f2dddcb6ccb1abc30040538c3db974a8f3914c00e2c9eb23dd930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723459765444925550,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e967e3926409b9b4490fa429d62fdc,},Annotations:map[string]string{io.kubernetes.container.hash: a14afb07,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c65bbec166c0e1e29b1dc74149f68f9ae8fc6eb749087afa70771e501ea1ea,PodSandboxId:b50fc8ef65be2259d321aba349f98c459ce88e2978e02c7d7453416d5fce1a0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723459762452923199,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2498c72d72e1e71b3b9015542989ea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02a975906041de1c0d96f3482a8837100f6c729585f87ca832b98cf7a9f71edc,PodSandboxId:af3930beb96f25570de66cfa8952d80d38d9f0a0a2a80f6dc13c475062fab782,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723459755302124395,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pj8gg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9a02941-b2f3-4ffe-bdca-07a7322887b1,},Annotations:map[string]string{io.kubernetes.container.hash: cbfab74f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2fc5ccdb449f41c11d07f7a4e5f0213f29756ab76385938d7d4be97b5cb121,PodSandboxId:f299608085a7359bb3ee02d4f12dbdf326b63649c5108f0c5a39af1e83398c66,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723459733285047880,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b0f1622d3c68c0a51defdcc0ae67a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6976ec7a56e859fcd49a53e0a3c9b9e23fa6ab1283c344676b64a27bc30f3ff,PodSandboxId:27c3c8acb92734404a0cd004ccd0c8b0c860547b5d72a17e4152fbee9b56e59c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723459722141629124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftvfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed243a1-62f6-4ad1-8873-0fbe1756be9e,},Annotations:map[string]string{io.kubernetes.container.hash: 3e160f3e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:bc6462a604f646f4d41247df33068d997dc236d79cc2786c0530f72f7574d1ee,PodSandboxId:6da31f89d702cc43c1ee7ce2d665857288109222a66679b4cbcef3fbafef0ad7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723459722102442243,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k5wz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e585a5-9ab7-4211-8ed0-dc1d21345883,},Annotations:map[string]string{io.kubernetes.container.hash: 44f25b1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:1766e0cc1e04cbf0b71e2ea90c9155d15810d451c0d3d7eba275dd2bc5f17ae2,PodSandboxId:b75bef0d429e552365caad1acf173c2947f316941914919974b6b9e62ff8b1d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723459721772325198,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d697e68-33fa-4784-90d8-0561d3fff6a8,},Annotations:map[string]string{io.kubernetes.container.hash: 4520a48c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:819e54d9
554ed5489fc402272cb1ef7f5adb6f5d6e5b210f0649673078590180,PodSandboxId:66d278adbf4b55ffb36576211a5c3ba25b269a1e237662e92d9788f67d2365ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723459721901879539,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148b0299f3f1839b42d2ab8e65cc0f2a,},Annotations:map[string]string{io.kubernetes.container.hash: 1194859b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee56da3827469ffbae6d2e0fafc2a824aa82ce08fc5374d33336e5201fe
a5df5,PodSandboxId:c588fd38b169b04dc89c2057742aef16a4b575345f9dfef462d8bebae9746711,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723459721867253840,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wstd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53bfc998-8d70-4dc5-b0f9-a78610183a2b,},Annotations:map[string]string{io.kubernetes.container.hash: de677b7f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f2b335c58f4e58dc1016841c6013788ca91f08610028f1b3191acb56e98aa93,PodSandboxId:b50fc8ef65be2259d321aba349f98c459ce88e2978e02c7d7453416d5fce1a0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723459721859873479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2498c72d72e1e71b3b9015542989ea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc2643d16d41c975a1af1ee9129789ff983df1bb4c8e03c11fbda01cd3f898d4,PodSandboxId:34445bb6eb65cf7c05d06cb43e6f84c241c458dbeefabbd6a15e9e33ca49e151,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723459721715881844,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e4b07d53879d3ec685dd71228335fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40a98a9a1e93623da9b24a95b7598aefeec227db5303dd2ec1bfad11b70d58bc,PodSandboxId:b77024a4392f2dddcb6ccb1abc30040538c3db974a8f3914c00e2c9eb23dd930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723459721619677152,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e967e3926409b9b4490fa429d62fdc,},Annotations:map[string]string{io.kubernetes.container.hash: a14afb07,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87dc2b222be5fad0df75edbfc5ffee9c05f568c78aaefea95c0fcf09ce77244e,PodSandboxId:9201197c1ac54eaf6a8c84ccaa8d2d8589790723cd2a7be14900c7a9bfd334ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723459717045196445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rc7cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92f21234-d4e8-4f0e-a8e5-356db2297843,},Annotations:map[string]string{io.kubernetes.container.hash: f28abbb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8542d2fe34f2b44c191e084e1f85f0eb7b1a0b1a39d63b3fd3ba0510027c5668,PodSandboxId:40dfaa461230a2e373c966c081e58e06b69e54e472764d0338e67823d0a759f7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723459217676022810,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pj8gg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9a02941-b2f3-4ffe-bdca-07a7322887b1,},Annot
ations:map[string]string{io.kubernetes.container.hash: cbfab74f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0c6b246369b77b455404b70fd88b94e4da26cbd201a7ae30b74cb7039d25b8,PodSandboxId:7ee3eb4b0b10eb268d84ccf4b86511e118adba8ff72671265ad6936653c79876,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723459065194039855,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wstd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53bfc998-8d70-4dc5-b0f9-a78610183a2b,},Annotations:map[string]string{io.kube
rnetes.container.hash: de677b7f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec7364f484b0d7fa35ace0124b8dc922a617b051b4ef8dc8d5513b3c9f49bf2b,PodSandboxId:a88f690225d3f6e35d63e64a591930de23dbdadec994a1d50c2a1302aaf9567d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723459065148082153,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rc7cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92f21234-d4e8-4f0e-a8e5-356db2297843,},Annotations:map[string]string{io.kubernetes.container.hash: f28abbb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d3c2394cc8cd76cb420cb4132296fa326bf586e6079af20c991bbd39a02e6bf,PodSandboxId:2abd5fefba6f34c44814b969b1a9bc9f1aa44fea6ad95145b733cb0c7e82bff9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723459052942878767,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k5wz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e585a5-9ab7-4211-8ed0-dc1d21345883,},Annotations:map[string]string{io.kubernetes.container.hash: 44f25b1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd3e13fb2b3b4d48e2306ffd36c6241a47faafe75f1b528e3a316c9bec0705f,PodSandboxId:b7d28551c45a6f766983f4c1618f75eefcc2aa63991e48279d50e948648a119a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723459048117998507,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftvfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed243a1-62f6-4ad1-8873-0fbe1756be9e,},Annotations:map[string]string{io.kubernetes.container.hash: 3e160f3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af78571207ce34c193f10ac91fe17888677e013439ec5e2bf1781b8f46309bf,PodSandboxId:06243d97384e5ebbf9480bd664848a21f3329d17103ac89c25d646cd5c6af2ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723459028074909889,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e4b07d53879d3ec685dd71228335fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c30877cfdccaa151966735f7bbaba34a1f500f301ce489538d01d863c68ee14,PodSandboxId:fae04d253fe0c5b3c2344ea51b9e34bb9ffbb9a35549113dbbdf82302161c5db,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723459028024477228,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148b0299f3f1839b42d2ab8e65cc0f2a,},Annotations:map[string]string{io.kubernetes.container.hash: 1194859b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=934a9bf2-90ba-4ff2-9940-8b0847096d73 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:53:45 ha-919901 crio[3808]: time="2024-08-12 10:53:45.243772699Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e072b358-fef7-459f-907f-cb05abaf2153 name=/runtime.v1.RuntimeService/Version
	Aug 12 10:53:45 ha-919901 crio[3808]: time="2024-08-12 10:53:45.243862773Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e072b358-fef7-459f-907f-cb05abaf2153 name=/runtime.v1.RuntimeService/Version
	Aug 12 10:53:45 ha-919901 crio[3808]: time="2024-08-12 10:53:45.245170305Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c9878877-4c08-4d52-86f5-e9d49766f859 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:53:45 ha-919901 crio[3808]: time="2024-08-12 10:53:45.245688164Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723460025245662792,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c9878877-4c08-4d52-86f5-e9d49766f859 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:53:45 ha-919901 crio[3808]: time="2024-08-12 10:53:45.246288760Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c7538379-27ba-4103-aa32-c2bfdbc2b6b5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:53:45 ha-919901 crio[3808]: time="2024-08-12 10:53:45.246365588Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c7538379-27ba-4103-aa32-c2bfdbc2b6b5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:53:45 ha-919901 crio[3808]: time="2024-08-12 10:53:45.246766228Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1feef8d0a7509a3143f3435dbab4d706c2a3b37b5a098b71fe9c4ed101579303,PodSandboxId:b75bef0d429e552365caad1acf173c2947f316941914919974b6b9e62ff8b1d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723459766461275399,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d697e68-33fa-4784-90d8-0561d3fff6a8,},Annotations:map[string]string{io.kubernetes.container.hash: 4520a48c,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e1fc5e3909238498769b1e7c49de8d11bd947aa9683e202c5cf20d8b125b790,PodSandboxId:b77024a4392f2dddcb6ccb1abc30040538c3db974a8f3914c00e2c9eb23dd930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723459765444925550,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e967e3926409b9b4490fa429d62fdc,},Annotations:map[string]string{io.kubernetes.container.hash: a14afb07,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c65bbec166c0e1e29b1dc74149f68f9ae8fc6eb749087afa70771e501ea1ea,PodSandboxId:b50fc8ef65be2259d321aba349f98c459ce88e2978e02c7d7453416d5fce1a0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723459762452923199,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2498c72d72e1e71b3b9015542989ea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02a975906041de1c0d96f3482a8837100f6c729585f87ca832b98cf7a9f71edc,PodSandboxId:af3930beb96f25570de66cfa8952d80d38d9f0a0a2a80f6dc13c475062fab782,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723459755302124395,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pj8gg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9a02941-b2f3-4ffe-bdca-07a7322887b1,},Annotations:map[string]string{io.kubernetes.container.hash: cbfab74f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2fc5ccdb449f41c11d07f7a4e5f0213f29756ab76385938d7d4be97b5cb121,PodSandboxId:f299608085a7359bb3ee02d4f12dbdf326b63649c5108f0c5a39af1e83398c66,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723459733285047880,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b0f1622d3c68c0a51defdcc0ae67a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6976ec7a56e859fcd49a53e0a3c9b9e23fa6ab1283c344676b64a27bc30f3ff,PodSandboxId:27c3c8acb92734404a0cd004ccd0c8b0c860547b5d72a17e4152fbee9b56e59c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723459722141629124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftvfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed243a1-62f6-4ad1-8873-0fbe1756be9e,},Annotations:map[string]string{io.kubernetes.container.hash: 3e160f3e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:bc6462a604f646f4d41247df33068d997dc236d79cc2786c0530f72f7574d1ee,PodSandboxId:6da31f89d702cc43c1ee7ce2d665857288109222a66679b4cbcef3fbafef0ad7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723459722102442243,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k5wz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e585a5-9ab7-4211-8ed0-dc1d21345883,},Annotations:map[string]string{io.kubernetes.container.hash: 44f25b1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:1766e0cc1e04cbf0b71e2ea90c9155d15810d451c0d3d7eba275dd2bc5f17ae2,PodSandboxId:b75bef0d429e552365caad1acf173c2947f316941914919974b6b9e62ff8b1d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723459721772325198,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d697e68-33fa-4784-90d8-0561d3fff6a8,},Annotations:map[string]string{io.kubernetes.container.hash: 4520a48c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:819e54d9
554ed5489fc402272cb1ef7f5adb6f5d6e5b210f0649673078590180,PodSandboxId:66d278adbf4b55ffb36576211a5c3ba25b269a1e237662e92d9788f67d2365ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723459721901879539,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148b0299f3f1839b42d2ab8e65cc0f2a,},Annotations:map[string]string{io.kubernetes.container.hash: 1194859b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee56da3827469ffbae6d2e0fafc2a824aa82ce08fc5374d33336e5201fe
a5df5,PodSandboxId:c588fd38b169b04dc89c2057742aef16a4b575345f9dfef462d8bebae9746711,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723459721867253840,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wstd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53bfc998-8d70-4dc5-b0f9-a78610183a2b,},Annotations:map[string]string{io.kubernetes.container.hash: de677b7f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f2b335c58f4e58dc1016841c6013788ca91f08610028f1b3191acb56e98aa93,PodSandboxId:b50fc8ef65be2259d321aba349f98c459ce88e2978e02c7d7453416d5fce1a0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723459721859873479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2498c72d72e1e71b3b9015542989ea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc2643d16d41c975a1af1ee9129789ff983df1bb4c8e03c11fbda01cd3f898d4,PodSandboxId:34445bb6eb65cf7c05d06cb43e6f84c241c458dbeefabbd6a15e9e33ca49e151,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723459721715881844,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e4b07d53879d3ec685dd71228335fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40a98a9a1e93623da9b24a95b7598aefeec227db5303dd2ec1bfad11b70d58bc,PodSandboxId:b77024a4392f2dddcb6ccb1abc30040538c3db974a8f3914c00e2c9eb23dd930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723459721619677152,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e967e3926409b9b4490fa429d62fdc,},Annotations:map[string]string{io.kubernetes.container.hash: a14afb07,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87dc2b222be5fad0df75edbfc5ffee9c05f568c78aaefea95c0fcf09ce77244e,PodSandboxId:9201197c1ac54eaf6a8c84ccaa8d2d8589790723cd2a7be14900c7a9bfd334ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723459717045196445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rc7cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92f21234-d4e8-4f0e-a8e5-356db2297843,},Annotations:map[string]string{io.kubernetes.container.hash: f28abbb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8542d2fe34f2b44c191e084e1f85f0eb7b1a0b1a39d63b3fd3ba0510027c5668,PodSandboxId:40dfaa461230a2e373c966c081e58e06b69e54e472764d0338e67823d0a759f7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723459217676022810,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pj8gg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9a02941-b2f3-4ffe-bdca-07a7322887b1,},Annot
ations:map[string]string{io.kubernetes.container.hash: cbfab74f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0c6b246369b77b455404b70fd88b94e4da26cbd201a7ae30b74cb7039d25b8,PodSandboxId:7ee3eb4b0b10eb268d84ccf4b86511e118adba8ff72671265ad6936653c79876,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723459065194039855,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wstd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53bfc998-8d70-4dc5-b0f9-a78610183a2b,},Annotations:map[string]string{io.kube
rnetes.container.hash: de677b7f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec7364f484b0d7fa35ace0124b8dc922a617b051b4ef8dc8d5513b3c9f49bf2b,PodSandboxId:a88f690225d3f6e35d63e64a591930de23dbdadec994a1d50c2a1302aaf9567d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723459065148082153,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rc7cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92f21234-d4e8-4f0e-a8e5-356db2297843,},Annotations:map[string]string{io.kubernetes.container.hash: f28abbb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d3c2394cc8cd76cb420cb4132296fa326bf586e6079af20c991bbd39a02e6bf,PodSandboxId:2abd5fefba6f34c44814b969b1a9bc9f1aa44fea6ad95145b733cb0c7e82bff9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723459052942878767,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k5wz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e585a5-9ab7-4211-8ed0-dc1d21345883,},Annotations:map[string]string{io.kubernetes.container.hash: 44f25b1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd3e13fb2b3b4d48e2306ffd36c6241a47faafe75f1b528e3a316c9bec0705f,PodSandboxId:b7d28551c45a6f766983f4c1618f75eefcc2aa63991e48279d50e948648a119a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723459048117998507,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftvfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed243a1-62f6-4ad1-8873-0fbe1756be9e,},Annotations:map[string]string{io.kubernetes.container.hash: 3e160f3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af78571207ce34c193f10ac91fe17888677e013439ec5e2bf1781b8f46309bf,PodSandboxId:06243d97384e5ebbf9480bd664848a21f3329d17103ac89c25d646cd5c6af2ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723459028074909889,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e4b07d53879d3ec685dd71228335fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c30877cfdccaa151966735f7bbaba34a1f500f301ce489538d01d863c68ee14,PodSandboxId:fae04d253fe0c5b3c2344ea51b9e34bb9ffbb9a35549113dbbdf82302161c5db,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723459028024477228,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148b0299f3f1839b42d2ab8e65cc0f2a,},Annotations:map[string]string{io.kubernetes.container.hash: 1194859b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c7538379-27ba-4103-aa32-c2bfdbc2b6b5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:53:45 ha-919901 crio[3808]: time="2024-08-12 10:53:45.291314719Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4c593e5b-02c9-4ff5-91c5-2898f87ad81c name=/runtime.v1.RuntimeService/Version
	Aug 12 10:53:45 ha-919901 crio[3808]: time="2024-08-12 10:53:45.291389361Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4c593e5b-02c9-4ff5-91c5-2898f87ad81c name=/runtime.v1.RuntimeService/Version
	Aug 12 10:53:45 ha-919901 crio[3808]: time="2024-08-12 10:53:45.292758611Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3aeaada2-914f-4309-b25f-7b423f5910e2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:53:45 ha-919901 crio[3808]: time="2024-08-12 10:53:45.293201696Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723460025293179054,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3aeaada2-914f-4309-b25f-7b423f5910e2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 10:53:45 ha-919901 crio[3808]: time="2024-08-12 10:53:45.293861784Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0282052-594f-46f5-b593-70337c3f7d7f name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:53:45 ha-919901 crio[3808]: time="2024-08-12 10:53:45.293932466Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0282052-594f-46f5-b593-70337c3f7d7f name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 10:53:45 ha-919901 crio[3808]: time="2024-08-12 10:53:45.294416645Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1feef8d0a7509a3143f3435dbab4d706c2a3b37b5a098b71fe9c4ed101579303,PodSandboxId:b75bef0d429e552365caad1acf173c2947f316941914919974b6b9e62ff8b1d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723459766461275399,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d697e68-33fa-4784-90d8-0561d3fff6a8,},Annotations:map[string]string{io.kubernetes.container.hash: 4520a48c,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e1fc5e3909238498769b1e7c49de8d11bd947aa9683e202c5cf20d8b125b790,PodSandboxId:b77024a4392f2dddcb6ccb1abc30040538c3db974a8f3914c00e2c9eb23dd930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723459765444925550,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e967e3926409b9b4490fa429d62fdc,},Annotations:map[string]string{io.kubernetes.container.hash: a14afb07,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c65bbec166c0e1e29b1dc74149f68f9ae8fc6eb749087afa70771e501ea1ea,PodSandboxId:b50fc8ef65be2259d321aba349f98c459ce88e2978e02c7d7453416d5fce1a0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723459762452923199,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2498c72d72e1e71b3b9015542989ea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02a975906041de1c0d96f3482a8837100f6c729585f87ca832b98cf7a9f71edc,PodSandboxId:af3930beb96f25570de66cfa8952d80d38d9f0a0a2a80f6dc13c475062fab782,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723459755302124395,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pj8gg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9a02941-b2f3-4ffe-bdca-07a7322887b1,},Annotations:map[string]string{io.kubernetes.container.hash: cbfab74f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2fc5ccdb449f41c11d07f7a4e5f0213f29756ab76385938d7d4be97b5cb121,PodSandboxId:f299608085a7359bb3ee02d4f12dbdf326b63649c5108f0c5a39af1e83398c66,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723459733285047880,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b0f1622d3c68c0a51defdcc0ae67a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6976ec7a56e859fcd49a53e0a3c9b9e23fa6ab1283c344676b64a27bc30f3ff,PodSandboxId:27c3c8acb92734404a0cd004ccd0c8b0c860547b5d72a17e4152fbee9b56e59c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723459722141629124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftvfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed243a1-62f6-4ad1-8873-0fbe1756be9e,},Annotations:map[string]string{io.kubernetes.container.hash: 3e160f3e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:bc6462a604f646f4d41247df33068d997dc236d79cc2786c0530f72f7574d1ee,PodSandboxId:6da31f89d702cc43c1ee7ce2d665857288109222a66679b4cbcef3fbafef0ad7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723459722102442243,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k5wz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e585a5-9ab7-4211-8ed0-dc1d21345883,},Annotations:map[string]string{io.kubernetes.container.hash: 44f25b1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:1766e0cc1e04cbf0b71e2ea90c9155d15810d451c0d3d7eba275dd2bc5f17ae2,PodSandboxId:b75bef0d429e552365caad1acf173c2947f316941914919974b6b9e62ff8b1d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723459721772325198,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d697e68-33fa-4784-90d8-0561d3fff6a8,},Annotations:map[string]string{io.kubernetes.container.hash: 4520a48c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:819e54d9
554ed5489fc402272cb1ef7f5adb6f5d6e5b210f0649673078590180,PodSandboxId:66d278adbf4b55ffb36576211a5c3ba25b269a1e237662e92d9788f67d2365ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723459721901879539,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148b0299f3f1839b42d2ab8e65cc0f2a,},Annotations:map[string]string{io.kubernetes.container.hash: 1194859b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee56da3827469ffbae6d2e0fafc2a824aa82ce08fc5374d33336e5201fe
a5df5,PodSandboxId:c588fd38b169b04dc89c2057742aef16a4b575345f9dfef462d8bebae9746711,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723459721867253840,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wstd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53bfc998-8d70-4dc5-b0f9-a78610183a2b,},Annotations:map[string]string{io.kubernetes.container.hash: de677b7f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f2b335c58f4e58dc1016841c6013788ca91f08610028f1b3191acb56e98aa93,PodSandboxId:b50fc8ef65be2259d321aba349f98c459ce88e2978e02c7d7453416d5fce1a0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723459721859873479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2498c72d72e1e71b3b9015542989ea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc2643d16d41c975a1af1ee9129789ff983df1bb4c8e03c11fbda01cd3f898d4,PodSandboxId:34445bb6eb65cf7c05d06cb43e6f84c241c458dbeefabbd6a15e9e33ca49e151,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723459721715881844,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e4b07d53879d3ec685dd71228335fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40a98a9a1e93623da9b24a95b7598aefeec227db5303dd2ec1bfad11b70d58bc,PodSandboxId:b77024a4392f2dddcb6ccb1abc30040538c3db974a8f3914c00e2c9eb23dd930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723459721619677152,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e967e3926409b9b4490fa429d62fdc,},Annotations:map[string]string{io.kubernetes.container.hash: a14afb07,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87dc2b222be5fad0df75edbfc5ffee9c05f568c78aaefea95c0fcf09ce77244e,PodSandboxId:9201197c1ac54eaf6a8c84ccaa8d2d8589790723cd2a7be14900c7a9bfd334ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723459717045196445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rc7cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92f21234-d4e8-4f0e-a8e5-356db2297843,},Annotations:map[string]string{io.kubernetes.container.hash: f28abbb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8542d2fe34f2b44c191e084e1f85f0eb7b1a0b1a39d63b3fd3ba0510027c5668,PodSandboxId:40dfaa461230a2e373c966c081e58e06b69e54e472764d0338e67823d0a759f7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723459217676022810,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pj8gg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9a02941-b2f3-4ffe-bdca-07a7322887b1,},Annot
ations:map[string]string{io.kubernetes.container.hash: cbfab74f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0c6b246369b77b455404b70fd88b94e4da26cbd201a7ae30b74cb7039d25b8,PodSandboxId:7ee3eb4b0b10eb268d84ccf4b86511e118adba8ff72671265ad6936653c79876,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723459065194039855,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wstd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53bfc998-8d70-4dc5-b0f9-a78610183a2b,},Annotations:map[string]string{io.kube
rnetes.container.hash: de677b7f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec7364f484b0d7fa35ace0124b8dc922a617b051b4ef8dc8d5513b3c9f49bf2b,PodSandboxId:a88f690225d3f6e35d63e64a591930de23dbdadec994a1d50c2a1302aaf9567d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723459065148082153,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rc7cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92f21234-d4e8-4f0e-a8e5-356db2297843,},Annotations:map[string]string{io.kubernetes.container.hash: f28abbb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d3c2394cc8cd76cb420cb4132296fa326bf586e6079af20c991bbd39a02e6bf,PodSandboxId:2abd5fefba6f34c44814b969b1a9bc9f1aa44fea6ad95145b733cb0c7e82bff9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723459052942878767,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k5wz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e585a5-9ab7-4211-8ed0-dc1d21345883,},Annotations:map[string]string{io.kubernetes.container.hash: 44f25b1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd3e13fb2b3b4d48e2306ffd36c6241a47faafe75f1b528e3a316c9bec0705f,PodSandboxId:b7d28551c45a6f766983f4c1618f75eefcc2aa63991e48279d50e948648a119a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723459048117998507,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftvfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed243a1-62f6-4ad1-8873-0fbe1756be9e,},Annotations:map[string]string{io.kubernetes.container.hash: 3e160f3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af78571207ce34c193f10ac91fe17888677e013439ec5e2bf1781b8f46309bf,PodSandboxId:06243d97384e5ebbf9480bd664848a21f3329d17103ac89c25d646cd5c6af2ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723459028074909889,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e4b07d53879d3ec685dd71228335fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c30877cfdccaa151966735f7bbaba34a1f500f301ce489538d01d863c68ee14,PodSandboxId:fae04d253fe0c5b3c2344ea51b9e34bb9ffbb9a35549113dbbdf82302161c5db,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723459028024477228,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-919901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148b0299f3f1839b42d2ab8e65cc0f2a,},Annotations:map[string]string{io.kubernetes.container.hash: 1194859b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f0282052-594f-46f5-b593-70337c3f7d7f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1feef8d0a7509       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   b75bef0d429e5       storage-provisioner
	9e1fc5e390923       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            3                   b77024a4392f2       kube-apiserver-ha-919901
	75c65bbec166c       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   2                   b50fc8ef65be2       kube-controller-manager-ha-919901
	02a975906041d       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   af3930beb96f2       busybox-fc5497c4f-pj8gg
	8a2fc5ccdb449       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   f299608085a73       kube-vip-ha-919901
	d6976ec7a56e8       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      5 minutes ago       Running             kube-proxy                1                   27c3c8acb9273       kube-proxy-ftvfl
	bc6462a604f64       917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557                                      5 minutes ago       Running             kindnet-cni               1                   6da31f89d702c       kindnet-k5wz9
	819e54d9554ed       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   66d278adbf4b5       etcd-ha-919901
	ee56da3827469       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   c588fd38b169b       coredns-7db6d8ff4d-wstd4
	1f2b335c58f4e       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      5 minutes ago       Exited              kube-controller-manager   1                   b50fc8ef65be2       kube-controller-manager-ha-919901
	1766e0cc1e04c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   b75bef0d429e5       storage-provisioner
	fc2643d16d41c       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      5 minutes ago       Running             kube-scheduler            1                   34445bb6eb65c       kube-scheduler-ha-919901
	40a98a9a1e936       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      5 minutes ago       Exited              kube-apiserver            2                   b77024a4392f2       kube-apiserver-ha-919901
	87dc2b222be5f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   9201197c1ac54       coredns-7db6d8ff4d-rc7cl
	8542d2fe34f2b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   40dfaa461230a       busybox-fc5497c4f-pj8gg
	6d0c6b246369b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   7ee3eb4b0b10e       coredns-7db6d8ff4d-wstd4
	ec7364f484b0d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   a88f690225d3f       coredns-7db6d8ff4d-rc7cl
	4d3c2394cc8cd       docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3    16 minutes ago      Exited              kindnet-cni               0                   2abd5fefba6f3       kindnet-k5wz9
	7cd3e13fb2b3b       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      16 minutes ago      Exited              kube-proxy                0                   b7d28551c45a6       kube-proxy-ftvfl
	2af78571207ce       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      16 minutes ago      Exited              kube-scheduler            0                   06243d97384e5       kube-scheduler-ha-919901
	0c30877cfdcca       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   fae04d253fe0c       etcd-ha-919901
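
Note: the container listing above is CRI output captured by "minikube logs". A similar listing can usually be reproduced directly on the node with crictl (a sketch, assuming crictl is available inside the node, e.g. via "minikube ssh", and that the CRI-O socket matches the kubeadm cri-socket annotation shown in the node descriptions below):

    # list all containers, including exited ones, through the CRI-O socket
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a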
	
	
	==> coredns [6d0c6b246369b77b455404b70fd88b94e4da26cbd201a7ae30b74cb7039d25b8] <==
	[INFO] 10.244.1.2:41656 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000234118s
	[INFO] 10.244.1.2:37332 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00027744s
	[INFO] 10.244.1.2:40223 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.010736666s
	[INFO] 10.244.0.4:34313 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099644s
	[INFO] 10.244.0.4:42226 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0013952s
	[INFO] 10.244.0.4:57222 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00017573s
	[INFO] 10.244.0.4:58894 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088282s
	[INFO] 10.244.2.2:46163 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143718s
	[INFO] 10.244.2.2:51332 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158612s
	[INFO] 10.244.2.2:38508 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000102467s
	[INFO] 10.244.1.2:36638 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127128s
	[INFO] 10.244.1.2:48634 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000196174s
	[INFO] 10.244.1.2:34717 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000153611s
	[INFO] 10.244.1.2:59132 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121069s
	[INFO] 10.244.0.4:52263 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00018165s
	[INFO] 10.244.0.4:33949 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000137401s
	[INFO] 10.244.0.4:50775 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000059871s
	[INFO] 10.244.2.2:49015 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152696s
	[INFO] 10.244.2.2:39997 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000159415s
	[INFO] 10.244.2.2:33769 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000094484s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [87dc2b222be5fad0df75edbfc5ffee9c05f568c78aaefea95c0fcf09ce77244e] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1167407080]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (12-Aug-2024 10:48:52.406) (total time: 10001ms):
	Trace[1167407080]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:49:02.408)
	Trace[1167407080]: [10.001690288s] [10.001690288s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1346001048]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (12-Aug-2024 10:48:52.422) (total time: 10001ms):
	Trace[1346001048]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:49:02.424)
	Trace[1346001048]: [10.001780624s] [10.001780624s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [ec7364f484b0d7fa35ace0124b8dc922a617b051b4ef8dc8d5513b3c9f49bf2b] <==
	[INFO] 10.244.0.4:36852 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079487s
	[INFO] 10.244.2.2:51413 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001945024s
	[INFO] 10.244.2.2:47991 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000079163s
	[INFO] 10.244.2.2:37019 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001502663s
	[INFO] 10.244.2.2:54793 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077144s
	[INFO] 10.244.2.2:58782 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056455s
	[INFO] 10.244.1.2:54292 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137507s
	[INFO] 10.244.1.2:59115 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089729s
	[INFO] 10.244.0.4:40377 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115376s
	[INFO] 10.244.0.4:56017 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088959s
	[INFO] 10.244.0.4:52411 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000057997s
	[INFO] 10.244.0.4:46999 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00005214s
	[INFO] 10.244.2.2:42855 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167607s
	[INFO] 10.244.2.2:43154 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117622s
	[INFO] 10.244.2.2:33056 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087079s
	[INFO] 10.244.2.2:52436 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114815s
	[INFO] 10.244.1.2:57727 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129686s
	[INFO] 10.244.1.2:60878 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00018786s
	[INFO] 10.244.0.4:47644 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000114448s
	[INFO] 10.244.2.2:38930 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000159722s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ee56da3827469ffbae6d2e0fafc2a824aa82ce08fc5374d33336e5201fea5df5] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:48242->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[814258677]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (12-Aug-2024 10:48:53.504) (total time: 10299ms):
	Trace[814258677]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:48242->10.96.0.1:443: read: connection reset by peer 10299ms (10:49:03.803)
	Trace[814258677]: [10.299732562s] [10.299732562s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:48242->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:48246->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1920876370]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (12-Aug-2024 10:48:53.845) (total time: 14045ms):
	Trace[1920876370]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:48246->10.96.0.1:443: read: connection reset by peer 14045ms (10:49:07.890)
	Trace[1920876370]: [14.045969474s] [14.045969474s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:48246->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
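
Note: 10.96.0.1:443 in the coredns errors above is the ClusterIP of the default "kubernetes" Service, i.e. the in-cluster route to the API server, so the repeated "connection refused" / "no route to host" failures are consistent with the kube-apiserver restarts visible in the container status section. A quick sanity check (a sketch; the kubectl context name is assumed to match the ha-919901 profile and may differ):

    kubectl --context ha-919901 -n default get svc kubernetes -o wide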
	
	
	==> describe nodes <==
	Name:               ha-919901
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-919901
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
	                    minikube.k8s.io/name=ha-919901
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_12T10_37_15_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 10:37:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-919901
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 10:53:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 10:52:31 +0000   Mon, 12 Aug 2024 10:52:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 10:52:31 +0000   Mon, 12 Aug 2024 10:52:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 10:52:31 +0000   Mon, 12 Aug 2024 10:52:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 10:52:31 +0000   Mon, 12 Aug 2024 10:52:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.5
	  Hostname:    ha-919901
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0604b91ac2ed4dfdb4f1eba3f89f2634
	  System UUID:                0604b91a-c2ed-4dfd-b4f1-eba3f89f2634
	  Boot ID:                    e69dd59d-8862-4943-a8be-e27de6624ddc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pj8gg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-rc7cl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-wstd4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-919901                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-k5wz9                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-919901             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-919901    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-ftvfl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-919901             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-919901                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   Starting                 4m18s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           16m                    node-controller  Node ha-919901 event: Registered Node ha-919901 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-919901 event: Registered Node ha-919901 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-919901 event: Registered Node ha-919901 in Controller
	  Warning  ContainerGCFailed        5m31s (x2 over 6m31s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m11s                  node-controller  Node ha-919901 event: Registered Node ha-919901 in Controller
	  Normal   RegisteredNode           4m6s                   node-controller  Node ha-919901 event: Registered Node ha-919901 in Controller
	  Normal   RegisteredNode           3m13s                  node-controller  Node ha-919901 event: Registered Node ha-919901 in Controller
	  Normal   NodeNotReady             101s                   node-controller  Node ha-919901 status is now: NodeNotReady
	  Normal   NodeHasSufficientPID     74s (x2 over 16m)      kubelet          Node ha-919901 status is now: NodeHasSufficientPID
	  Normal   NodeReady                74s (x2 over 16m)      kubelet          Node ha-919901 status is now: NodeReady
	  Normal   NodeHasNoDiskPressure    74s (x2 over 16m)      kubelet          Node ha-919901 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  74s (x2 over 16m)      kubelet          Node ha-919901 status is now: NodeHasSufficientMemory
	
	
	Name:               ha-919901-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-919901-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
	                    minikube.k8s.io/name=ha-919901
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_12T10_38_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 10:38:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-919901-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 10:53:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 10:52:31 +0000   Mon, 12 Aug 2024 10:52:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 10:52:31 +0000   Mon, 12 Aug 2024 10:52:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 10:52:31 +0000   Mon, 12 Aug 2024 10:52:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 10:52:31 +0000   Mon, 12 Aug 2024 10:52:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.139
	  Hostname:    ha-919901-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b2d78288ee7d4cf8b54a7dd9f4bdd0a2
	  System UUID:                b2d78288-ee7d-4cf8-b54a-7dd9f4bdd0a2
	  Boot ID:                    d72cd250-7bd8-4d68-95c5-1f7c57ad2cfe
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-46rph                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-919901-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-8cqm5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-919901-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-919901-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-cczfj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-919901-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-919901-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  Starting                 3m55s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-919901-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-919901-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-919901-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           15m                    node-controller  Node ha-919901-m02 event: Registered Node ha-919901-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-919901-m02 event: Registered Node ha-919901-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-919901-m02 event: Registered Node ha-919901-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-919901-m02 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    4m47s (x8 over 4m47s)  kubelet          Node ha-919901-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 4m47s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m47s (x8 over 4m47s)  kubelet          Node ha-919901-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     4m47s (x7 over 4m47s)  kubelet          Node ha-919901-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-919901-m02 event: Registered Node ha-919901-m02 in Controller
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-919901-m02 event: Registered Node ha-919901-m02 in Controller
	  Normal  RegisteredNode           3m13s                  node-controller  Node ha-919901-m02 event: Registered Node ha-919901-m02 in Controller
	  Normal  NodeNotReady             101s                   node-controller  Node ha-919901-m02 status is now: NodeNotReady
	
	
	Name:               ha-919901-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-919901-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
	                    minikube.k8s.io/name=ha-919901
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_12T10_40_49_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 10:40:48 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-919901-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 10:51:18 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 12 Aug 2024 10:50:57 +0000   Mon, 12 Aug 2024 10:51:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 12 Aug 2024 10:50:57 +0000   Mon, 12 Aug 2024 10:51:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 12 Aug 2024 10:50:57 +0000   Mon, 12 Aug 2024 10:51:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 12 Aug 2024 10:50:57 +0000   Mon, 12 Aug 2024 10:51:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.218
	  Hostname:    ha-919901-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d9924b3342904c65bcf17b38012b444a
	  System UUID:                d9924b33-4290-4c65-bcf1-7b38012b444a
	  Boot ID:                    30fa988a-7807-41ac-b291-dc75074e230b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-szw5b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-clr9b              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-2h4vt           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-919901-m04 event: Registered Node ha-919901-m04 in Controller
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-919901-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-919901-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-919901-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-919901-m04 event: Registered Node ha-919901-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-919901-m04 event: Registered Node ha-919901-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-919901-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m11s                  node-controller  Node ha-919901-m04 event: Registered Node ha-919901-m04 in Controller
	  Normal   RegisteredNode           4m6s                   node-controller  Node ha-919901-m04 event: Registered Node ha-919901-m04 in Controller
	  Normal   RegisteredNode           3m13s                  node-controller  Node ha-919901-m04 event: Registered Node ha-919901-m04 in Controller
	  Normal   NodeHasSufficientMemory  2m48s (x2 over 2m48s)  kubelet          Node ha-919901-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m48s (x2 over 2m48s)  kubelet          Node ha-919901-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x2 over 2m48s)  kubelet          Node ha-919901-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s                  kubelet          Node ha-919901-m04 has been rebooted, boot id: 30fa988a-7807-41ac-b291-dc75074e230b
	  Normal   NodeReady                2m48s                  kubelet          Node ha-919901-m04 status is now: NodeReady
	  Normal   NodeNotReady             106s (x2 over 3m31s)   node-controller  Node ha-919901-m04 status is now: NodeNotReady
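
Note: the node.kubernetes.io/unreachable NoSchedule/NoExecute taints and the "Unknown" conditions above indicate that the kubelet on ha-919901-m04 stopped posting status to the API server. The taints can be re-checked at any time (a sketch; assumes the same ha-919901 context as above):

    kubectl --context ha-919901 describe node ha-919901-m04 | grep -A2 '^Taints:'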
	
	
	==> dmesg <==
	[  +0.064986] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.049228] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.190717] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.120674] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.278615] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[Aug12 10:37] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +3.648433] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.060066] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.249848] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.088679] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.931862] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.868842] kauditd_printk_skb: 29 callbacks suppressed
	[Aug12 10:38] kauditd_printk_skb: 26 callbacks suppressed
	[Aug12 10:45] kauditd_printk_skb: 1 callbacks suppressed
	[Aug12 10:48] systemd-fstab-generator[3726]: Ignoring "noauto" option for root device
	[  +0.145695] systemd-fstab-generator[3738]: Ignoring "noauto" option for root device
	[  +0.176311] systemd-fstab-generator[3752]: Ignoring "noauto" option for root device
	[  +0.152826] systemd-fstab-generator[3764]: Ignoring "noauto" option for root device
	[  +0.276685] systemd-fstab-generator[3792]: Ignoring "noauto" option for root device
	[  +7.905668] systemd-fstab-generator[3897]: Ignoring "noauto" option for root device
	[  +0.088150] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.352854] kauditd_printk_skb: 22 callbacks suppressed
	[ +11.859150] kauditd_printk_skb: 76 callbacks suppressed
	[Aug12 10:49] kauditd_printk_skb: 3 callbacks suppressed
	[  +9.069363] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [0c30877cfdccaa151966735f7bbaba34a1f500f301ce489538d01d863c68ee14] <==
	{"level":"info","ts":"2024-08-12T10:46:55.81048Z","caller":"traceutil/trace.go:171","msg":"trace[2146744767] range","detail":"{range_begin:/registry/deployments/; range_end:/registry/deployments0; }","duration":"249.115462ms","start":"2024-08-12T10:46:55.561358Z","end":"2024-08-12T10:46:55.810474Z","steps":["trace[2146744767] 'agreement among raft nodes before linearized reading'  (duration: 247.725009ms)"],"step_count":1}
	2024/08/12 10:46:55 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-12T10:46:55.808963Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"256.340324ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" limit:10000 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-12T10:46:55.81051Z","caller":"traceutil/trace.go:171","msg":"trace[1448295790] range","detail":"{range_begin:/registry/validatingadmissionpolicies/; range_end:/registry/validatingadmissionpolicies0; }","duration":"257.939097ms","start":"2024-08-12T10:46:55.552567Z","end":"2024-08-12T10:46:55.810506Z","steps":["trace[1448295790] 'agreement among raft nodes before linearized reading'  (duration: 256.347003ms)"],"step_count":1}
	2024/08/12 10:46:55 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-12T10:46:55.880022Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-12T10:46:55.880118Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-12T10:46:55.880343Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"c5263387c79c0223","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-12T10:46:55.880582Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f8c824025eafd254"}
	{"level":"info","ts":"2024-08-12T10:46:55.880829Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f8c824025eafd254"}
	{"level":"info","ts":"2024-08-12T10:46:55.880896Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f8c824025eafd254"}
	{"level":"info","ts":"2024-08-12T10:46:55.881009Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254"}
	{"level":"info","ts":"2024-08-12T10:46:55.881063Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254"}
	{"level":"info","ts":"2024-08-12T10:46:55.88112Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c5263387c79c0223","remote-peer-id":"f8c824025eafd254"}
	{"level":"info","ts":"2024-08-12T10:46:55.881154Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f8c824025eafd254"}
	{"level":"info","ts":"2024-08-12T10:46:55.881182Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"adb6b1085391554e"}
	{"level":"info","ts":"2024-08-12T10:46:55.881208Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"adb6b1085391554e"}
	{"level":"info","ts":"2024-08-12T10:46:55.88131Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"adb6b1085391554e"}
	{"level":"info","ts":"2024-08-12T10:46:55.881423Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c5263387c79c0223","remote-peer-id":"adb6b1085391554e"}
	{"level":"info","ts":"2024-08-12T10:46:55.881469Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c5263387c79c0223","remote-peer-id":"adb6b1085391554e"}
	{"level":"info","ts":"2024-08-12T10:46:55.881519Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c5263387c79c0223","remote-peer-id":"adb6b1085391554e"}
	{"level":"info","ts":"2024-08-12T10:46:55.881547Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"adb6b1085391554e"}
	{"level":"info","ts":"2024-08-12T10:46:55.884544Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.5:2380"}
	{"level":"info","ts":"2024-08-12T10:46:55.884704Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.5:2380"}
	{"level":"info","ts":"2024-08-12T10:46:55.884755Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-919901","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.5:2380"],"advertise-client-urls":["https://192.168.39.5:2379"]}
	
	
	==> etcd [819e54d9554ed5489fc402272cb1ef7f5adb6f5d6e5b210f0649673078590180] <==
	{"level":"info","ts":"2024-08-12T10:50:13.256653Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"adb6b1085391554e"}
	{"level":"info","ts":"2024-08-12T10:50:13.256904Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c5263387c79c0223","remote-peer-id":"adb6b1085391554e"}
	{"level":"info","ts":"2024-08-12T10:50:13.258672Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"c5263387c79c0223","remote-peer-id":"adb6b1085391554e"}
	{"level":"info","ts":"2024-08-12T10:50:13.279051Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"c5263387c79c0223","to":"adb6b1085391554e","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-12T10:50:13.279109Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"c5263387c79c0223","remote-peer-id":"adb6b1085391554e"}
	{"level":"info","ts":"2024-08-12T10:50:13.290397Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"c5263387c79c0223","to":"adb6b1085391554e","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-12T10:50:13.290592Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"c5263387c79c0223","remote-peer-id":"adb6b1085391554e"}
	{"level":"info","ts":"2024-08-12T10:50:18.467764Z","caller":"traceutil/trace.go:171","msg":"trace[236088396] transaction","detail":"{read_only:false; response_revision:2358; number_of_response:1; }","duration":"162.322496ms","start":"2024-08-12T10:50:18.305398Z","end":"2024-08-12T10:50:18.467721Z","steps":["trace[236088396] 'process raft request'  (duration: 160.949792ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T10:51:11.624049Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c5263387c79c0223 switched to configuration voters=(14206098732849300003 17926617909345374804)"}
	{"level":"info","ts":"2024-08-12T10:51:11.626294Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"436188ec3031a10e","local-member-id":"c5263387c79c0223","removed-remote-peer-id":"adb6b1085391554e","removed-remote-peer-urls":["https://192.168.39.195:2380"]}
	{"level":"info","ts":"2024-08-12T10:51:11.626445Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"adb6b1085391554e"}
	{"level":"warn","ts":"2024-08-12T10:51:11.627416Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"adb6b1085391554e"}
	{"level":"info","ts":"2024-08-12T10:51:11.627492Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"adb6b1085391554e"}
	{"level":"warn","ts":"2024-08-12T10:51:11.627838Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"adb6b1085391554e"}
	{"level":"info","ts":"2024-08-12T10:51:11.6279Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"adb6b1085391554e"}
	{"level":"info","ts":"2024-08-12T10:51:11.627979Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c5263387c79c0223","remote-peer-id":"adb6b1085391554e"}
	{"level":"warn","ts":"2024-08-12T10:51:11.628197Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c5263387c79c0223","remote-peer-id":"adb6b1085391554e","error":"context canceled"}
	{"level":"warn","ts":"2024-08-12T10:51:11.628334Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"adb6b1085391554e","error":"failed to read adb6b1085391554e on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-12T10:51:11.62841Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c5263387c79c0223","remote-peer-id":"adb6b1085391554e"}
	{"level":"warn","ts":"2024-08-12T10:51:11.628694Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"c5263387c79c0223","remote-peer-id":"adb6b1085391554e","error":"context canceled"}
	{"level":"info","ts":"2024-08-12T10:51:11.628756Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c5263387c79c0223","remote-peer-id":"adb6b1085391554e"}
	{"level":"info","ts":"2024-08-12T10:51:11.628818Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"adb6b1085391554e"}
	{"level":"info","ts":"2024-08-12T10:51:11.628892Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"c5263387c79c0223","removed-remote-peer-id":"adb6b1085391554e"}
	{"level":"warn","ts":"2024-08-12T10:51:11.64464Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"c5263387c79c0223","remote-peer-id-stream-handler":"c5263387c79c0223","remote-peer-id-from":"adb6b1085391554e"}
	{"level":"warn","ts":"2024-08-12T10:51:11.652338Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.195:59678","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:53:45 up 17 min,  0 users,  load average: 0.52, 0.47, 0.36
	Linux ha-919901 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4d3c2394cc8cd76cb420cb4132296fa326bf586e6079af20c991bbd39a02e6bf] <==
	I0812 10:46:23.961889       1 main.go:299] handling current node
	I0812 10:46:33.952159       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0812 10:46:33.952190       1 main.go:299] handling current node
	I0812 10:46:33.952207       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0812 10:46:33.952213       1 main.go:322] Node ha-919901-m02 has CIDR [10.244.1.0/24] 
	I0812 10:46:33.952422       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0812 10:46:33.952444       1 main.go:322] Node ha-919901-m03 has CIDR [10.244.2.0/24] 
	I0812 10:46:33.952504       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0812 10:46:33.952522       1 main.go:322] Node ha-919901-m04 has CIDR [10.244.3.0/24] 
	I0812 10:46:43.951870       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0812 10:46:43.951923       1 main.go:322] Node ha-919901-m04 has CIDR [10.244.3.0/24] 
	I0812 10:46:43.952161       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0812 10:46:43.952183       1 main.go:299] handling current node
	I0812 10:46:43.952195       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0812 10:46:43.952200       1 main.go:322] Node ha-919901-m02 has CIDR [10.244.1.0/24] 
	I0812 10:46:43.952305       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0812 10:46:43.952348       1 main.go:322] Node ha-919901-m03 has CIDR [10.244.2.0/24] 
	I0812 10:46:53.960285       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0812 10:46:53.960336       1 main.go:299] handling current node
	I0812 10:46:53.960352       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0812 10:46:53.960358       1 main.go:322] Node ha-919901-m02 has CIDR [10.244.1.0/24] 
	I0812 10:46:53.960540       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0812 10:46:53.960563       1 main.go:322] Node ha-919901-m03 has CIDR [10.244.2.0/24] 
	I0812 10:46:53.960648       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0812 10:46:53.960669       1 main.go:322] Node ha-919901-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [bc6462a604f646f4d41247df33068d997dc236d79cc2786c0530f72f7574d1ee] <==
	I0812 10:53:03.197915       1 main.go:322] Node ha-919901-m04 has CIDR [10.244.3.0/24] 
	I0812 10:53:13.203554       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0812 10:53:13.203655       1 main.go:299] handling current node
	I0812 10:53:13.203686       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0812 10:53:13.203705       1 main.go:322] Node ha-919901-m02 has CIDR [10.244.1.0/24] 
	I0812 10:53:13.203873       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0812 10:53:13.203896       1 main.go:322] Node ha-919901-m04 has CIDR [10.244.3.0/24] 
	I0812 10:53:23.199006       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0812 10:53:23.199117       1 main.go:322] Node ha-919901-m04 has CIDR [10.244.3.0/24] 
	I0812 10:53:23.199331       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0812 10:53:23.199366       1 main.go:299] handling current node
	I0812 10:53:23.199398       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0812 10:53:23.199418       1 main.go:322] Node ha-919901-m02 has CIDR [10.244.1.0/24] 
	I0812 10:53:33.197529       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0812 10:53:33.197598       1 main.go:299] handling current node
	I0812 10:53:33.197614       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0812 10:53:33.197619       1 main.go:322] Node ha-919901-m02 has CIDR [10.244.1.0/24] 
	I0812 10:53:33.197749       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0812 10:53:33.197767       1 main.go:322] Node ha-919901-m04 has CIDR [10.244.3.0/24] 
	I0812 10:53:43.197687       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0812 10:53:43.197742       1 main.go:299] handling current node
	I0812 10:53:43.197757       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0812 10:53:43.197762       1 main.go:322] Node ha-919901-m02 has CIDR [10.244.1.0/24] 
	I0812 10:53:43.197942       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0812 10:53:43.197962       1 main.go:322] Node ha-919901-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [40a98a9a1e93623da9b24a95b7598aefeec227db5303dd2ec1bfad11b70d58bc] <==
	I0812 10:48:42.118944       1 options.go:221] external host was not specified, using 192.168.39.5
	I0812 10:48:42.122553       1 server.go:148] Version: v1.30.3
	I0812 10:48:42.122597       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 10:48:42.792922       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0812 10:48:42.795890       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0812 10:48:42.796073       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0812 10:48:42.796315       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0812 10:48:42.796395       1 instance.go:299] Using reconciler: lease
	W0812 10:49:02.784466       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0812 10:49:02.785764       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0812 10:49:02.797740       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [9e1fc5e3909238498769b1e7c49de8d11bd947aa9683e202c5cf20d8b125b790] <==
	I0812 10:49:27.293420       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0812 10:49:27.329657       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0812 10:49:27.329689       1 policy_source.go:224] refreshing policies
	I0812 10:49:27.354372       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0812 10:49:27.363163       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0812 10:49:27.373986       1 shared_informer.go:320] Caches are synced for configmaps
	I0812 10:49:27.375483       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0812 10:49:27.378387       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0812 10:49:27.378417       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0812 10:49:27.378634       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0812 10:49:27.376109       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0812 10:49:27.385896       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0812 10:49:27.391131       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.139 192.168.39.195]
	I0812 10:49:27.392637       1 controller.go:615] quota admission added evaluator for: endpoints
	I0812 10:49:27.394450       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0812 10:49:27.394480       1 aggregator.go:165] initial CRD sync complete...
	I0812 10:49:27.394500       1 autoregister_controller.go:141] Starting autoregister controller
	I0812 10:49:27.394509       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0812 10:49:27.394514       1 cache.go:39] Caches are synced for autoregister controller
	I0812 10:49:27.399064       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0812 10:49:27.407531       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0812 10:49:28.283100       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0812 10:49:28.754530       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.139 192.168.39.195 192.168.39.5]
	W0812 10:49:38.727178       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.139 192.168.39.5]
	W0812 10:51:28.733867       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.139 192.168.39.5]
	
	
	==> kube-controller-manager [1f2b335c58f4e58dc1016841c6013788ca91f08610028f1b3191acb56e98aa93] <==
	I0812 10:48:43.680200       1 serving.go:380] Generated self-signed cert in-memory
	I0812 10:48:44.001889       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0812 10:48:44.001932       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 10:48:44.003478       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0812 10:48:44.003638       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0812 10:48:44.003813       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0812 10:48:44.004018       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0812 10:49:04.006623       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.5:8443/healthz\": dial tcp 192.168.39.5:8443: connect: connection refused"
	
	
	==> kube-controller-manager [75c65bbec166c0e1e29b1dc74149f68f9ae8fc6eb749087afa70771e501ea1ea] <==
	I0812 10:51:59.873892       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.575µs"
	I0812 10:52:04.926511       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0812 10:52:04.932399       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.018953ms"
	I0812 10:52:04.935202       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.405µs"
	I0812 10:52:04.947915       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="27.011017ms"
	I0812 10:52:04.949389       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="173.59µs"
	I0812 10:52:05.120654       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.692846ms"
	I0812 10:52:05.121466       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="93.321µs"
	I0812 10:52:05.248473       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-qqnkt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-qqnkt\": the object has been modified; please apply your changes to the latest version and try again"
	I0812 10:52:05.249337       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"7975b33c-8206-449f-a51d-014bbab1aaa2", APIVersion:"v1", ResourceVersion:"241", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-qqnkt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-qqnkt": the object has been modified; please apply your changes to the latest version and try again
	I0812 10:52:05.285806       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.683489ms"
	I0812 10:52:05.285930       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="75.538µs"
	I0812 10:52:24.622309       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-qqnkt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-qqnkt\": the object has been modified; please apply your changes to the latest version and try again"
	I0812 10:52:24.622838       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"7975b33c-8206-449f-a51d-014bbab1aaa2", APIVersion:"v1", ResourceVersion:"241", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-qqnkt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-qqnkt": the object has been modified; please apply your changes to the latest version and try again
	I0812 10:52:24.624685       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="27.449573ms"
	I0812 10:52:24.625524       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="185.497µs"
	I0812 10:52:24.692368       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-qqnkt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-qqnkt\": the object has been modified; please apply your changes to the latest version and try again"
	I0812 10:52:24.692843       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"7975b33c-8206-449f-a51d-014bbab1aaa2", APIVersion:"v1", ResourceVersion:"241", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-qqnkt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-qqnkt": the object has been modified; please apply your changes to the latest version and try again
	I0812 10:52:24.702345       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="29.820218ms"
	I0812 10:52:24.702484       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.137µs"
	I0812 10:52:24.781290       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.734921ms"
	I0812 10:52:24.781449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.795µs"
	I0812 10:52:28.830032       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.130028ms"
	I0812 10:52:28.830202       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.736µs"
	I0812 10:52:34.961947       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [7cd3e13fb2b3b4d48e2306ffd36c6241a47faafe75f1b528e3a316c9bec0705f] <==
	E0812 10:45:51.925786       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-919901&resourceVersion=1841": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 10:45:54.996506       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 10:45:54.996640       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 10:45:54.996762       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 10:45:54.996832       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 10:45:58.068075       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-919901&resourceVersion=1841": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 10:45:58.068140       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-919901&resourceVersion=1841": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 10:46:01.140034       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 10:46:01.140292       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 10:46:01.140437       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 10:46:01.140839       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 10:46:04.212743       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-919901&resourceVersion=1841": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 10:46:04.212938       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-919901&resourceVersion=1841": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 10:46:10.355322       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 10:46:10.355524       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 10:46:13.428463       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-919901&resourceVersion=1841": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 10:46:13.428526       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-919901&resourceVersion=1841": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 10:46:13.428569       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 10:46:13.428595       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 10:46:25.714928       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 10:46:25.715036       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 10:46:31.859423       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 10:46:31.860293       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 10:46:41.075203       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-919901&resourceVersion=1841": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 10:46:41.075392       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-919901&resourceVersion=1841": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [d6976ec7a56e859fcd49a53e0a3c9b9e23fa6ab1283c344676b64a27bc30f3ff] <==
	E0812 10:49:08.531098       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-919901\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0812 10:49:26.994583       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-919901\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0812 10:49:26.994681       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0812 10:49:27.041408       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0812 10:49:27.041480       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0812 10:49:27.041497       1 server_linux.go:165] "Using iptables Proxier"
	I0812 10:49:27.099601       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0812 10:49:27.099908       1 server.go:872] "Version info" version="v1.30.3"
	I0812 10:49:27.099935       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 10:49:27.107750       1 config.go:192] "Starting service config controller"
	I0812 10:49:27.107896       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0812 10:49:27.107953       1 config.go:101] "Starting endpoint slice config controller"
	I0812 10:49:27.107971       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0812 10:49:27.108951       1 config.go:319] "Starting node config controller"
	I0812 10:49:27.109031       1 shared_informer.go:313] Waiting for caches to sync for node config
	W0812 10:49:30.035302       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-919901&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 10:49:30.035751       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-919901&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 10:49:30.036731       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 10:49:30.036811       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 10:49:30.036911       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 10:49:30.036956       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 10:49:30.037056       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0812 10:49:31.009157       1 shared_informer.go:320] Caches are synced for service config
	I0812 10:49:31.312550       1 shared_informer.go:320] Caches are synced for node config
	I0812 10:49:31.508852       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2af78571207ce34c193f10ac91fe17888677e013439ec5e2bf1781b8f46309bf] <==
	W0812 10:46:51.980092       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0812 10:46:51.980135       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0812 10:46:51.993536       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0812 10:46:51.993588       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0812 10:46:52.209057       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0812 10:46:52.209097       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0812 10:46:52.399930       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0812 10:46:52.400066       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0812 10:46:52.581837       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0812 10:46:52.581885       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0812 10:46:52.688037       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0812 10:46:52.688110       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0812 10:46:52.691566       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0812 10:46:52.691718       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0812 10:46:53.407780       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0812 10:46:53.407863       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0812 10:46:53.480788       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0812 10:46:53.480867       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0812 10:46:53.544618       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0812 10:46:53.544748       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0812 10:46:54.319664       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0812 10:46:54.319709       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0812 10:46:55.256623       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0812 10:46:55.256677       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0812 10:46:55.763703       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [fc2643d16d41c975a1af1ee9129789ff983df1bb4c8e03c11fbda01cd3f898d4] <==
	W0812 10:49:19.536633       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0812 10:49:19.536791       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0812 10:49:19.568709       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0812 10:49:19.569553       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0812 10:49:20.229284       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0812 10:49:20.229362       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0812 10:49:21.181283       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0812 10:49:21.181430       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0812 10:49:21.647788       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0812 10:49:21.647837       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0812 10:49:22.486092       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0812 10:49:22.486136       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0812 10:49:22.661028       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0812 10:49:22.661172       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0812 10:49:23.026498       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0812 10:49:23.026558       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0812 10:49:23.319319       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0812 10:49:23.319412       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0812 10:49:23.348308       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.5:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0812 10:49:23.348404       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.5:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0812 10:49:23.578454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0812 10:49:23.578504       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0812 10:49:24.329898       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0812 10:49:24.329949       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	I0812 10:49:32.808609       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 12 10:52:12 ha-919901 kubelet[1369]: E0812 10:52:12.047435    1369 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-919901\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-919901?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Aug 12 10:52:13 ha-919901 kubelet[1369]: E0812 10:52:13.217007    1369 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-919901?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Aug 12 10:52:14 ha-919901 kubelet[1369]: E0812 10:52:14.519832    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 10:52:14 ha-919901 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 10:52:14 ha-919901 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 10:52:14 ha-919901 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 10:52:14 ha-919901 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 10:52:21 ha-919901 kubelet[1369]: W0812 10:52:21.263498    1369 reflector.go:470] object-"kube-system"/"kube-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 12 10:52:21 ha-919901 kubelet[1369]: E0812 10:52:21.263680    1369 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-919901\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-919901?timeout=10s\": http2: client connection lost"
	Aug 12 10:52:21 ha-919901 kubelet[1369]: E0812 10:52:21.263700    1369 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Aug 12 10:52:21 ha-919901 kubelet[1369]: W0812 10:52:21.263729    1369 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 12 10:52:21 ha-919901 kubelet[1369]: W0812 10:52:21.263752    1369 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 12 10:52:21 ha-919901 kubelet[1369]: W0812 10:52:21.263773    1369 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 12 10:52:21 ha-919901 kubelet[1369]: W0812 10:52:21.263805    1369 reflector.go:470] object-"default"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 12 10:52:21 ha-919901 kubelet[1369]: W0812 10:52:21.263498    1369 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 12 10:52:21 ha-919901 kubelet[1369]: W0812 10:52:21.263537    1369 reflector.go:470] pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 12 10:52:21 ha-919901 kubelet[1369]: W0812 10:52:21.263846    1369 reflector.go:470] object-"kube-system"/"coredns": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 12 10:52:21 ha-919901 kubelet[1369]: E0812 10:52:21.263927    1369 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-919901?timeout=10s\": http2: client connection lost"
	Aug 12 10:52:21 ha-919901 kubelet[1369]: I0812 10:52:21.263961    1369 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Aug 12 10:52:21 ha-919901 kubelet[1369]: W0812 10:52:21.264115    1369 reflector.go:470] object-"kube-system"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 12 10:53:14 ha-919901 kubelet[1369]: E0812 10:53:14.515713    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 10:53:14 ha-919901 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 10:53:14 ha-919901 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 10:53:14 ha-919901 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 10:53:14 ha-919901 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0812 10:53:44.831707   30882 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19409-3774/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-919901 -n ha-919901
helpers_test.go:261: (dbg) Run:  kubectl --context ha-919901 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.88s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (331.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-053297
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-053297
E0812 11:10:45.938661   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-053297: exit status 82 (2m1.766797483s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-053297-m03"  ...
	* Stopping node "multinode-053297-m02"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-053297" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-053297 --wait=true -v=8 --alsologtostderr
E0812 11:13:30.975727   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
E0812 11:13:48.982078   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-053297 --wait=true -v=8 --alsologtostderr: (3m27.806489407s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-053297
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-053297 -n multinode-053297
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-053297 logs -n 25: (1.554326378s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-053297 ssh -n                                                                 | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | multinode-053297-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-053297 cp multinode-053297-m02:/home/docker/cp-test.txt                       | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4188486420/001/cp-test_multinode-053297-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-053297 ssh -n                                                                 | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | multinode-053297-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-053297 cp multinode-053297-m02:/home/docker/cp-test.txt                       | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | multinode-053297:/home/docker/cp-test_multinode-053297-m02_multinode-053297.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-053297 ssh -n                                                                 | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | multinode-053297-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-053297 ssh -n multinode-053297 sudo cat                                       | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | /home/docker/cp-test_multinode-053297-m02_multinode-053297.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-053297 cp multinode-053297-m02:/home/docker/cp-test.txt                       | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | multinode-053297-m03:/home/docker/cp-test_multinode-053297-m02_multinode-053297-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-053297 ssh -n                                                                 | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | multinode-053297-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-053297 ssh -n multinode-053297-m03 sudo cat                                   | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | /home/docker/cp-test_multinode-053297-m02_multinode-053297-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-053297 cp testdata/cp-test.txt                                                | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | multinode-053297-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-053297 ssh -n                                                                 | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | multinode-053297-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-053297 cp multinode-053297-m03:/home/docker/cp-test.txt                       | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4188486420/001/cp-test_multinode-053297-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-053297 ssh -n                                                                 | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | multinode-053297-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-053297 cp multinode-053297-m03:/home/docker/cp-test.txt                       | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | multinode-053297:/home/docker/cp-test_multinode-053297-m03_multinode-053297.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-053297 ssh -n                                                                 | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | multinode-053297-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-053297 ssh -n multinode-053297 sudo cat                                       | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | /home/docker/cp-test_multinode-053297-m03_multinode-053297.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-053297 cp multinode-053297-m03:/home/docker/cp-test.txt                       | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | multinode-053297-m02:/home/docker/cp-test_multinode-053297-m03_multinode-053297-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-053297 ssh -n                                                                 | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | multinode-053297-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-053297 ssh -n multinode-053297-m02 sudo cat                                   | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | /home/docker/cp-test_multinode-053297-m03_multinode-053297-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-053297 node stop m03                                                          | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	| node    | multinode-053297 node start                                                             | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-053297                                                                | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC |                     |
	| stop    | -p multinode-053297                                                                     | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC |                     |
	| start   | -p multinode-053297                                                                     | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:10 UTC | 12 Aug 24 11:14 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-053297                                                                | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:14 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 11:10:52
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 11:10:52.899612   40267 out.go:291] Setting OutFile to fd 1 ...
	I0812 11:10:52.899925   40267 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:10:52.899936   40267 out.go:304] Setting ErrFile to fd 2...
	I0812 11:10:52.899942   40267 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:10:52.900173   40267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 11:10:52.900789   40267 out.go:298] Setting JSON to false
	I0812 11:10:52.901764   40267 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3194,"bootTime":1723457859,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 11:10:52.901832   40267 start.go:139] virtualization: kvm guest
	I0812 11:10:52.904464   40267 out.go:177] * [multinode-053297] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 11:10:52.905986   40267 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 11:10:52.906025   40267 notify.go:220] Checking for updates...
	I0812 11:10:52.908947   40267 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 11:10:52.910680   40267 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 11:10:52.912390   40267 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 11:10:52.913974   40267 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 11:10:52.915358   40267 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 11:10:52.917150   40267 config.go:182] Loaded profile config "multinode-053297": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 11:10:52.917290   40267 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 11:10:52.917761   40267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:10:52.917832   40267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:10:52.933987   40267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33457
	I0812 11:10:52.934435   40267 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:10:52.935055   40267 main.go:141] libmachine: Using API Version  1
	I0812 11:10:52.935074   40267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:10:52.935470   40267 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:10:52.935686   40267 main.go:141] libmachine: (multinode-053297) Calling .DriverName
	I0812 11:10:52.973076   40267 out.go:177] * Using the kvm2 driver based on existing profile
	I0812 11:10:52.974377   40267 start.go:297] selected driver: kvm2
	I0812 11:10:52.974394   40267 start.go:901] validating driver "kvm2" against &{Name:multinode-053297 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:multinode-053297 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.9 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.182 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingres
s-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirr
or: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:10:52.974535   40267 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 11:10:52.974870   40267 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 11:10:52.974932   40267 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19409-3774/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 11:10:52.990429   40267 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 11:10:52.991365   40267 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 11:10:52.991447   40267 cni.go:84] Creating CNI manager for ""
	I0812 11:10:52.991462   40267 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0812 11:10:52.991542   40267 start.go:340] cluster config:
	{Name:multinode-053297 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-053297 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.9 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.182 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kon
g:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:10:52.991705   40267 iso.go:125] acquiring lock: {Name:mk817f4bb6a5031e68978029203802957205757f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 11:10:52.993654   40267 out.go:177] * Starting "multinode-053297" primary control-plane node in "multinode-053297" cluster
	I0812 11:10:52.994931   40267 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 11:10:52.994984   40267 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0812 11:10:52.994994   40267 cache.go:56] Caching tarball of preloaded images
	I0812 11:10:52.995095   40267 preload.go:172] Found /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 11:10:52.995107   40267 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 11:10:52.995237   40267 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/multinode-053297/config.json ...
	I0812 11:10:52.995449   40267 start.go:360] acquireMachinesLock for multinode-053297: {Name:mkf1f3ff7da562a087a1e344d13e67c3c8140973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 11:10:52.995491   40267 start.go:364] duration metric: took 23.538µs to acquireMachinesLock for "multinode-053297"
	I0812 11:10:52.995505   40267 start.go:96] Skipping create...Using existing machine configuration
	I0812 11:10:52.995511   40267 fix.go:54] fixHost starting: 
	I0812 11:10:52.995764   40267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:10:52.995805   40267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:10:53.010740   40267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38183
	I0812 11:10:53.011200   40267 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:10:53.011697   40267 main.go:141] libmachine: Using API Version  1
	I0812 11:10:53.011721   40267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:10:53.012001   40267 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:10:53.012177   40267 main.go:141] libmachine: (multinode-053297) Calling .DriverName
	I0812 11:10:53.012347   40267 main.go:141] libmachine: (multinode-053297) Calling .GetState
	I0812 11:10:53.013953   40267 fix.go:112] recreateIfNeeded on multinode-053297: state=Running err=<nil>
	W0812 11:10:53.013969   40267 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 11:10:53.015835   40267 out.go:177] * Updating the running kvm2 "multinode-053297" VM ...
	I0812 11:10:53.017182   40267 machine.go:94] provisionDockerMachine start ...
	I0812 11:10:53.017207   40267 main.go:141] libmachine: (multinode-053297) Calling .DriverName
	I0812 11:10:53.017425   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHHostname
	I0812 11:10:53.020642   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:10:53.021332   40267 main.go:141] libmachine: (multinode-053297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:99:5e", ip: ""} in network mk-multinode-053297: {Iface:virbr1 ExpiryTime:2024-08-12 12:05:19 +0000 UTC Type:0 Mac:52:54:00:b2:99:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-053297 Clientid:01:52:54:00:b2:99:5e}
	I0812 11:10:53.021361   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined IP address 192.168.39.95 and MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:10:53.021520   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHPort
	I0812 11:10:53.021690   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHKeyPath
	I0812 11:10:53.021867   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHKeyPath
	I0812 11:10:53.021979   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHUsername
	I0812 11:10:53.022121   40267 main.go:141] libmachine: Using SSH client type: native
	I0812 11:10:53.022363   40267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0812 11:10:53.022376   40267 main.go:141] libmachine: About to run SSH command:
	hostname
	I0812 11:10:53.138165   40267 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-053297
	
	I0812 11:10:53.138209   40267 main.go:141] libmachine: (multinode-053297) Calling .GetMachineName
	I0812 11:10:53.138469   40267 buildroot.go:166] provisioning hostname "multinode-053297"
	I0812 11:10:53.138489   40267 main.go:141] libmachine: (multinode-053297) Calling .GetMachineName
	I0812 11:10:53.138700   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHHostname
	I0812 11:10:53.141550   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:10:53.142051   40267 main.go:141] libmachine: (multinode-053297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:99:5e", ip: ""} in network mk-multinode-053297: {Iface:virbr1 ExpiryTime:2024-08-12 12:05:19 +0000 UTC Type:0 Mac:52:54:00:b2:99:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-053297 Clientid:01:52:54:00:b2:99:5e}
	I0812 11:10:53.142082   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined IP address 192.168.39.95 and MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:10:53.142200   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHPort
	I0812 11:10:53.142385   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHKeyPath
	I0812 11:10:53.142558   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHKeyPath
	I0812 11:10:53.142706   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHUsername
	I0812 11:10:53.142883   40267 main.go:141] libmachine: Using SSH client type: native
	I0812 11:10:53.143041   40267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0812 11:10:53.143053   40267 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-053297 && echo "multinode-053297" | sudo tee /etc/hostname
	I0812 11:10:53.268189   40267 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-053297
	
	I0812 11:10:53.268215   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHHostname
	I0812 11:10:53.270867   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:10:53.271239   40267 main.go:141] libmachine: (multinode-053297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:99:5e", ip: ""} in network mk-multinode-053297: {Iface:virbr1 ExpiryTime:2024-08-12 12:05:19 +0000 UTC Type:0 Mac:52:54:00:b2:99:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-053297 Clientid:01:52:54:00:b2:99:5e}
	I0812 11:10:53.271281   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined IP address 192.168.39.95 and MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:10:53.271430   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHPort
	I0812 11:10:53.271611   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHKeyPath
	I0812 11:10:53.271761   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHKeyPath
	I0812 11:10:53.271923   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHUsername
	I0812 11:10:53.272043   40267 main.go:141] libmachine: Using SSH client type: native
	I0812 11:10:53.272218   40267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0812 11:10:53.272236   40267 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-053297' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-053297/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-053297' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 11:10:53.385740   40267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 11:10:53.385777   40267 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19409-3774/.minikube CaCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19409-3774/.minikube}
	I0812 11:10:53.385832   40267 buildroot.go:174] setting up certificates
	I0812 11:10:53.385911   40267 provision.go:84] configureAuth start
	I0812 11:10:53.385958   40267 main.go:141] libmachine: (multinode-053297) Calling .GetMachineName
	I0812 11:10:53.386276   40267 main.go:141] libmachine: (multinode-053297) Calling .GetIP
	I0812 11:10:53.389383   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:10:53.389859   40267 main.go:141] libmachine: (multinode-053297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:99:5e", ip: ""} in network mk-multinode-053297: {Iface:virbr1 ExpiryTime:2024-08-12 12:05:19 +0000 UTC Type:0 Mac:52:54:00:b2:99:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-053297 Clientid:01:52:54:00:b2:99:5e}
	I0812 11:10:53.389887   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined IP address 192.168.39.95 and MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:10:53.390078   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHHostname
	I0812 11:10:53.392513   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:10:53.392856   40267 main.go:141] libmachine: (multinode-053297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:99:5e", ip: ""} in network mk-multinode-053297: {Iface:virbr1 ExpiryTime:2024-08-12 12:05:19 +0000 UTC Type:0 Mac:52:54:00:b2:99:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-053297 Clientid:01:52:54:00:b2:99:5e}
	I0812 11:10:53.392900   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined IP address 192.168.39.95 and MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:10:53.393040   40267 provision.go:143] copyHostCerts
	I0812 11:10:53.393070   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem
	I0812 11:10:53.393113   40267 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem, removing ...
	I0812 11:10:53.393122   40267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem
	I0812 11:10:53.393205   40267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem (1082 bytes)
	I0812 11:10:53.393318   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem
	I0812 11:10:53.393345   40267 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem, removing ...
	I0812 11:10:53.393355   40267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem
	I0812 11:10:53.393395   40267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem (1123 bytes)
	I0812 11:10:53.393471   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem
	I0812 11:10:53.393495   40267 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem, removing ...
	I0812 11:10:53.393504   40267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem
	I0812 11:10:53.393538   40267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem (1679 bytes)
	I0812 11:10:53.393638   40267 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem org=jenkins.multinode-053297 san=[127.0.0.1 192.168.39.95 localhost minikube multinode-053297]
	I0812 11:10:53.452627   40267 provision.go:177] copyRemoteCerts
	I0812 11:10:53.452679   40267 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 11:10:53.452703   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHHostname
	I0812 11:10:53.455651   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:10:53.455975   40267 main.go:141] libmachine: (multinode-053297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:99:5e", ip: ""} in network mk-multinode-053297: {Iface:virbr1 ExpiryTime:2024-08-12 12:05:19 +0000 UTC Type:0 Mac:52:54:00:b2:99:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-053297 Clientid:01:52:54:00:b2:99:5e}
	I0812 11:10:53.456024   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined IP address 192.168.39.95 and MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:10:53.456233   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHPort
	I0812 11:10:53.456463   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHKeyPath
	I0812 11:10:53.456633   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHUsername
	I0812 11:10:53.456772   40267 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/multinode-053297/id_rsa Username:docker}
	I0812 11:10:53.543879   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0812 11:10:53.543956   40267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0812 11:10:53.569869   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0812 11:10:53.569962   40267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0812 11:10:53.596610   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0812 11:10:53.596681   40267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0812 11:10:53.624262   40267 provision.go:87] duration metric: took 238.314605ms to configureAuth
	I0812 11:10:53.624293   40267 buildroot.go:189] setting minikube options for container-runtime
	I0812 11:10:53.624531   40267 config.go:182] Loaded profile config "multinode-053297": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 11:10:53.624642   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHHostname
	I0812 11:10:53.627272   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:10:53.627668   40267 main.go:141] libmachine: (multinode-053297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:99:5e", ip: ""} in network mk-multinode-053297: {Iface:virbr1 ExpiryTime:2024-08-12 12:05:19 +0000 UTC Type:0 Mac:52:54:00:b2:99:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-053297 Clientid:01:52:54:00:b2:99:5e}
	I0812 11:10:53.627698   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined IP address 192.168.39.95 and MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:10:53.627924   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHPort
	I0812 11:10:53.628131   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHKeyPath
	I0812 11:10:53.628285   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHKeyPath
	I0812 11:10:53.628448   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHUsername
	I0812 11:10:53.628669   40267 main.go:141] libmachine: Using SSH client type: native
	I0812 11:10:53.628848   40267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0812 11:10:53.628887   40267 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 11:12:24.514340   40267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 11:12:24.514376   40267 machine.go:97] duration metric: took 1m31.497175287s to provisionDockerMachine
	I0812 11:12:24.514395   40267 start.go:293] postStartSetup for "multinode-053297" (driver="kvm2")
	I0812 11:12:24.514410   40267 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 11:12:24.514432   40267 main.go:141] libmachine: (multinode-053297) Calling .DriverName
	I0812 11:12:24.514813   40267 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 11:12:24.514839   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHHostname
	I0812 11:12:24.518090   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:12:24.518486   40267 main.go:141] libmachine: (multinode-053297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:99:5e", ip: ""} in network mk-multinode-053297: {Iface:virbr1 ExpiryTime:2024-08-12 12:05:19 +0000 UTC Type:0 Mac:52:54:00:b2:99:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-053297 Clientid:01:52:54:00:b2:99:5e}
	I0812 11:12:24.518507   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined IP address 192.168.39.95 and MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:12:24.518690   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHPort
	I0812 11:12:24.518907   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHKeyPath
	I0812 11:12:24.519111   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHUsername
	I0812 11:12:24.519273   40267 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/multinode-053297/id_rsa Username:docker}
	I0812 11:12:24.608663   40267 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 11:12:24.612702   40267 command_runner.go:130] > NAME=Buildroot
	I0812 11:12:24.612721   40267 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0812 11:12:24.612726   40267 command_runner.go:130] > ID=buildroot
	I0812 11:12:24.612731   40267 command_runner.go:130] > VERSION_ID=2023.02.9
	I0812 11:12:24.612742   40267 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0812 11:12:24.612770   40267 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 11:12:24.612785   40267 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/addons for local assets ...
	I0812 11:12:24.612856   40267 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/files for local assets ...
	I0812 11:12:24.612965   40267 filesync.go:149] local asset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> 109272.pem in /etc/ssl/certs
	I0812 11:12:24.612974   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> /etc/ssl/certs/109272.pem
	I0812 11:12:24.613072   40267 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 11:12:24.622611   40267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /etc/ssl/certs/109272.pem (1708 bytes)
	I0812 11:12:24.649506   40267 start.go:296] duration metric: took 135.095455ms for postStartSetup
	I0812 11:12:24.649555   40267 fix.go:56] duration metric: took 1m31.654043513s for fixHost
	I0812 11:12:24.649575   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHHostname
	I0812 11:12:24.652194   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:12:24.652554   40267 main.go:141] libmachine: (multinode-053297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:99:5e", ip: ""} in network mk-multinode-053297: {Iface:virbr1 ExpiryTime:2024-08-12 12:05:19 +0000 UTC Type:0 Mac:52:54:00:b2:99:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-053297 Clientid:01:52:54:00:b2:99:5e}
	I0812 11:12:24.652586   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined IP address 192.168.39.95 and MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:12:24.652681   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHPort
	I0812 11:12:24.652923   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHKeyPath
	I0812 11:12:24.653079   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHKeyPath
	I0812 11:12:24.653232   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHUsername
	I0812 11:12:24.653413   40267 main.go:141] libmachine: Using SSH client type: native
	I0812 11:12:24.653604   40267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0812 11:12:24.653615   40267 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0812 11:12:24.777267   40267 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723461144.752439142
	
	I0812 11:12:24.777296   40267 fix.go:216] guest clock: 1723461144.752439142
	I0812 11:12:24.777307   40267 fix.go:229] Guest: 2024-08-12 11:12:24.752439142 +0000 UTC Remote: 2024-08-12 11:12:24.649559793 +0000 UTC m=+91.786675546 (delta=102.879349ms)
	I0812 11:12:24.777341   40267 fix.go:200] guest clock delta is within tolerance: 102.879349ms
	I0812 11:12:24.777352   40267 start.go:83] releasing machines lock for "multinode-053297", held for 1m31.781851105s
	I0812 11:12:24.777391   40267 main.go:141] libmachine: (multinode-053297) Calling .DriverName
	I0812 11:12:24.777678   40267 main.go:141] libmachine: (multinode-053297) Calling .GetIP
	I0812 11:12:24.780377   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:12:24.780756   40267 main.go:141] libmachine: (multinode-053297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:99:5e", ip: ""} in network mk-multinode-053297: {Iface:virbr1 ExpiryTime:2024-08-12 12:05:19 +0000 UTC Type:0 Mac:52:54:00:b2:99:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-053297 Clientid:01:52:54:00:b2:99:5e}
	I0812 11:12:24.780811   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined IP address 192.168.39.95 and MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:12:24.780906   40267 main.go:141] libmachine: (multinode-053297) Calling .DriverName
	I0812 11:12:24.781370   40267 main.go:141] libmachine: (multinode-053297) Calling .DriverName
	I0812 11:12:24.781634   40267 main.go:141] libmachine: (multinode-053297) Calling .DriverName
	I0812 11:12:24.781764   40267 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 11:12:24.781801   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHHostname
	I0812 11:12:24.781875   40267 ssh_runner.go:195] Run: cat /version.json
	I0812 11:12:24.781899   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHHostname
	I0812 11:12:24.784501   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:12:24.784906   40267 main.go:141] libmachine: (multinode-053297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:99:5e", ip: ""} in network mk-multinode-053297: {Iface:virbr1 ExpiryTime:2024-08-12 12:05:19 +0000 UTC Type:0 Mac:52:54:00:b2:99:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-053297 Clientid:01:52:54:00:b2:99:5e}
	I0812 11:12:24.784946   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:12:24.784971   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined IP address 192.168.39.95 and MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:12:24.785120   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHPort
	I0812 11:12:24.785310   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHKeyPath
	I0812 11:12:24.785486   40267 main.go:141] libmachine: (multinode-053297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:99:5e", ip: ""} in network mk-multinode-053297: {Iface:virbr1 ExpiryTime:2024-08-12 12:05:19 +0000 UTC Type:0 Mac:52:54:00:b2:99:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-053297 Clientid:01:52:54:00:b2:99:5e}
	I0812 11:12:24.785495   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHUsername
	I0812 11:12:24.785511   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined IP address 192.168.39.95 and MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:12:24.785692   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHPort
	I0812 11:12:24.785700   40267 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/multinode-053297/id_rsa Username:docker}
	I0812 11:12:24.785859   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHKeyPath
	I0812 11:12:24.786010   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHUsername
	I0812 11:12:24.786150   40267 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/multinode-053297/id_rsa Username:docker}
	I0812 11:12:24.905838   40267 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0812 11:12:24.906575   40267 command_runner.go:130] > {"iso_version": "v1.33.1-1722420371-19355", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "7d72c3be84f92807e8ddb66796778c6727075dd6"}
	I0812 11:12:24.906760   40267 ssh_runner.go:195] Run: systemctl --version
	I0812 11:12:24.913393   40267 command_runner.go:130] > systemd 252 (252)
	I0812 11:12:24.913450   40267 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0812 11:12:24.913516   40267 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 11:12:25.073010   40267 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0812 11:12:25.079558   40267 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0812 11:12:25.079922   40267 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 11:12:25.079998   40267 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 11:12:25.089520   40267 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0812 11:12:25.089560   40267 start.go:495] detecting cgroup driver to use...
	I0812 11:12:25.089633   40267 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 11:12:25.105924   40267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 11:12:25.120369   40267 docker.go:217] disabling cri-docker service (if available) ...
	I0812 11:12:25.120422   40267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 11:12:25.134261   40267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 11:12:25.148211   40267 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 11:12:25.295586   40267 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 11:12:25.438381   40267 docker.go:233] disabling docker service ...
	I0812 11:12:25.438447   40267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 11:12:25.456167   40267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 11:12:25.470969   40267 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 11:12:25.616134   40267 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 11:12:25.773426   40267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 11:12:25.786869   40267 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 11:12:25.806169   40267 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0812 11:12:25.806485   40267 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0812 11:12:25.806547   40267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:12:25.817087   40267 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 11:12:25.817178   40267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:12:25.827781   40267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:12:25.837924   40267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:12:25.848766   40267 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 11:12:25.859737   40267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:12:25.870508   40267 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:12:25.881028   40267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
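
The sed invocations above rewrite CRI-O's 02-crio.conf drop-in in place. As a quick way to see what they leave behind, the sketch below is not minikube code; the program and its output format are assumptions, and only the file path and the three keys come from the log. It reads the drop-in back and prints the pause image, cgroup manager and conmon cgroup values:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Path taken from the log above; everything else here is illustrative.
	const dropin = "/etc/crio/crio.conf.d/02-crio.conf"

	data, err := os.ReadFile(dropin)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}

	// The edits above set (or re-create) these keys; pull them back out.
	for _, key := range []string{"pause_image", "cgroup_manager", "conmon_cgroup"} {
		re := regexp.MustCompile(`(?m)^\s*` + key + `\s*=\s*"([^"]+)"`)
		if m := re.FindSubmatch(data); m != nil {
			fmt.Printf("%s = %q\n", key, m[1])
		} else {
			fmt.Printf("%s is not set\n", key)
		}
	}
}

On a node configured as logged here this should report registry.k8s.io/pause:3.9, cgroupfs and pod respectively.
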
	I0812 11:12:25.891120   40267 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 11:12:25.900288   40267 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0812 11:12:25.900497   40267 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 11:12:25.909652   40267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:12:26.052003   40267 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0812 11:12:34.161606   40267 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.109562649s)
	I0812 11:12:34.161642   40267 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 11:12:34.161702   40267 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 11:12:34.166323   40267 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0812 11:12:34.166354   40267 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0812 11:12:34.166374   40267 command_runner.go:130] > Device: 0,22	Inode: 1351        Links: 1
	I0812 11:12:34.166381   40267 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0812 11:12:34.166386   40267 command_runner.go:130] > Access: 2024-08-12 11:12:34.019779954 +0000
	I0812 11:12:34.166403   40267 command_runner.go:130] > Modify: 2024-08-12 11:12:34.019779954 +0000
	I0812 11:12:34.166413   40267 command_runner.go:130] > Change: 2024-08-12 11:12:34.019779954 +0000
	I0812 11:12:34.166418   40267 command_runner.go:130] >  Birth: -
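
Between the crio restart and the crictl checks, the log notes a bounded wait for /var/run/crio/crio.sock. A minimal sketch of that kind of wait, assuming a plain os.Stat polling loop rather than minikube's actual retry helper, looks like this:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket file is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}
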
	I0812 11:12:34.166551   40267 start.go:563] Will wait 60s for crictl version
	I0812 11:12:34.166631   40267 ssh_runner.go:195] Run: which crictl
	I0812 11:12:34.170610   40267 command_runner.go:130] > /usr/bin/crictl
	I0812 11:12:34.170685   40267 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 11:12:34.205615   40267 command_runner.go:130] > Version:  0.1.0
	I0812 11:12:34.205650   40267 command_runner.go:130] > RuntimeName:  cri-o
	I0812 11:12:34.205658   40267 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0812 11:12:34.205665   40267 command_runner.go:130] > RuntimeApiVersion:  v1
	I0812 11:12:34.205689   40267 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
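
The Version/RuntimeName/RuntimeVersion/RuntimeApiVersion block above is the plain-text output of crictl version. If the same fields are needed programmatically, a small sketch like the one below splits each line on its first colon; the parsing approach and field names used here are assumptions based only on the output shown, not minikube's implementation:

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// crictl path as reported by `which crictl` in the log above.
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").Output()
	if err != nil {
		fmt.Println("crictl version:", err)
		return
	}
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
			fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	fmt.Printf("runtime %s %s (CRI API %s)\n",
		fields["RuntimeName"], fields["RuntimeVersion"], fields["RuntimeApiVersion"])
}
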
	I0812 11:12:34.205771   40267 ssh_runner.go:195] Run: crio --version
	I0812 11:12:34.234794   40267 command_runner.go:130] > crio version 1.29.1
	I0812 11:12:34.234823   40267 command_runner.go:130] > Version:        1.29.1
	I0812 11:12:34.234830   40267 command_runner.go:130] > GitCommit:      unknown
	I0812 11:12:34.234835   40267 command_runner.go:130] > GitCommitDate:  unknown
	I0812 11:12:34.234856   40267 command_runner.go:130] > GitTreeState:   clean
	I0812 11:12:34.234864   40267 command_runner.go:130] > BuildDate:      2024-07-31T15:55:08Z
	I0812 11:12:34.234869   40267 command_runner.go:130] > GoVersion:      go1.21.6
	I0812 11:12:34.234875   40267 command_runner.go:130] > Compiler:       gc
	I0812 11:12:34.234881   40267 command_runner.go:130] > Platform:       linux/amd64
	I0812 11:12:34.234887   40267 command_runner.go:130] > Linkmode:       dynamic
	I0812 11:12:34.234893   40267 command_runner.go:130] > BuildTags:      
	I0812 11:12:34.234900   40267 command_runner.go:130] >   containers_image_ostree_stub
	I0812 11:12:34.234907   40267 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0812 11:12:34.234917   40267 command_runner.go:130] >   btrfs_noversion
	I0812 11:12:34.234924   40267 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0812 11:12:34.234931   40267 command_runner.go:130] >   libdm_no_deferred_remove
	I0812 11:12:34.234937   40267 command_runner.go:130] >   seccomp
	I0812 11:12:34.234945   40267 command_runner.go:130] > LDFlags:          unknown
	I0812 11:12:34.234952   40267 command_runner.go:130] > SeccompEnabled:   true
	I0812 11:12:34.234959   40267 command_runner.go:130] > AppArmorEnabled:  false
	I0812 11:12:34.235041   40267 ssh_runner.go:195] Run: crio --version
	I0812 11:12:34.262741   40267 command_runner.go:130] > crio version 1.29.1
	I0812 11:12:34.262767   40267 command_runner.go:130] > Version:        1.29.1
	I0812 11:12:34.262775   40267 command_runner.go:130] > GitCommit:      unknown
	I0812 11:12:34.262781   40267 command_runner.go:130] > GitCommitDate:  unknown
	I0812 11:12:34.262787   40267 command_runner.go:130] > GitTreeState:   clean
	I0812 11:12:34.262794   40267 command_runner.go:130] > BuildDate:      2024-07-31T15:55:08Z
	I0812 11:12:34.262799   40267 command_runner.go:130] > GoVersion:      go1.21.6
	I0812 11:12:34.262805   40267 command_runner.go:130] > Compiler:       gc
	I0812 11:12:34.262812   40267 command_runner.go:130] > Platform:       linux/amd64
	I0812 11:12:34.262818   40267 command_runner.go:130] > Linkmode:       dynamic
	I0812 11:12:34.262824   40267 command_runner.go:130] > BuildTags:      
	I0812 11:12:34.262831   40267 command_runner.go:130] >   containers_image_ostree_stub
	I0812 11:12:34.262839   40267 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0812 11:12:34.262853   40267 command_runner.go:130] >   btrfs_noversion
	I0812 11:12:34.262860   40267 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0812 11:12:34.262871   40267 command_runner.go:130] >   libdm_no_deferred_remove
	I0812 11:12:34.262877   40267 command_runner.go:130] >   seccomp
	I0812 11:12:34.262885   40267 command_runner.go:130] > LDFlags:          unknown
	I0812 11:12:34.262894   40267 command_runner.go:130] > SeccompEnabled:   true
	I0812 11:12:34.262901   40267 command_runner.go:130] > AppArmorEnabled:  false
	I0812 11:12:34.266184   40267 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0812 11:12:34.267395   40267 main.go:141] libmachine: (multinode-053297) Calling .GetIP
	I0812 11:12:34.270166   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:12:34.270496   40267 main.go:141] libmachine: (multinode-053297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:99:5e", ip: ""} in network mk-multinode-053297: {Iface:virbr1 ExpiryTime:2024-08-12 12:05:19 +0000 UTC Type:0 Mac:52:54:00:b2:99:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-053297 Clientid:01:52:54:00:b2:99:5e}
	I0812 11:12:34.270527   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined IP address 192.168.39.95 and MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:12:34.270733   40267 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0812 11:12:34.274852   40267 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0812 11:12:34.274974   40267 kubeadm.go:883] updating cluster {Name:multinode-053297 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-053297 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.9 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.182 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0812 11:12:34.275127   40267 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 11:12:34.275180   40267 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 11:12:34.318681   40267 command_runner.go:130] > {
	I0812 11:12:34.318704   40267 command_runner.go:130] >   "images": [
	I0812 11:12:34.318708   40267 command_runner.go:130] >     {
	I0812 11:12:34.318715   40267 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0812 11:12:34.318720   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.318725   40267 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0812 11:12:34.318729   40267 command_runner.go:130] >       ],
	I0812 11:12:34.318733   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.318743   40267 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0812 11:12:34.318750   40267 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0812 11:12:34.318753   40267 command_runner.go:130] >       ],
	I0812 11:12:34.318766   40267 command_runner.go:130] >       "size": "87165492",
	I0812 11:12:34.318771   40267 command_runner.go:130] >       "uid": null,
	I0812 11:12:34.318774   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.318782   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.318789   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.318793   40267 command_runner.go:130] >     },
	I0812 11:12:34.318796   40267 command_runner.go:130] >     {
	I0812 11:12:34.318801   40267 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0812 11:12:34.318805   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.318810   40267 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0812 11:12:34.318814   40267 command_runner.go:130] >       ],
	I0812 11:12:34.318818   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.318825   40267 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0812 11:12:34.318833   40267 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0812 11:12:34.318836   40267 command_runner.go:130] >       ],
	I0812 11:12:34.318840   40267 command_runner.go:130] >       "size": "87165492",
	I0812 11:12:34.318844   40267 command_runner.go:130] >       "uid": null,
	I0812 11:12:34.318852   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.318858   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.318862   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.318865   40267 command_runner.go:130] >     },
	I0812 11:12:34.318868   40267 command_runner.go:130] >     {
	I0812 11:12:34.318873   40267 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0812 11:12:34.318877   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.318882   40267 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0812 11:12:34.318887   40267 command_runner.go:130] >       ],
	I0812 11:12:34.318891   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.318898   40267 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0812 11:12:34.318905   40267 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0812 11:12:34.318909   40267 command_runner.go:130] >       ],
	I0812 11:12:34.318913   40267 command_runner.go:130] >       "size": "1363676",
	I0812 11:12:34.318919   40267 command_runner.go:130] >       "uid": null,
	I0812 11:12:34.318923   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.318926   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.318931   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.318937   40267 command_runner.go:130] >     },
	I0812 11:12:34.318945   40267 command_runner.go:130] >     {
	I0812 11:12:34.318951   40267 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0812 11:12:34.318957   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.318962   40267 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0812 11:12:34.318968   40267 command_runner.go:130] >       ],
	I0812 11:12:34.318972   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.318979   40267 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0812 11:12:34.318995   40267 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0812 11:12:34.319000   40267 command_runner.go:130] >       ],
	I0812 11:12:34.319005   40267 command_runner.go:130] >       "size": "31470524",
	I0812 11:12:34.319009   40267 command_runner.go:130] >       "uid": null,
	I0812 11:12:34.319012   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.319016   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.319020   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.319023   40267 command_runner.go:130] >     },
	I0812 11:12:34.319026   40267 command_runner.go:130] >     {
	I0812 11:12:34.319032   40267 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0812 11:12:34.319038   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.319043   40267 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0812 11:12:34.319048   40267 command_runner.go:130] >       ],
	I0812 11:12:34.319052   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.319059   40267 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0812 11:12:34.319068   40267 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0812 11:12:34.319071   40267 command_runner.go:130] >       ],
	I0812 11:12:34.319075   40267 command_runner.go:130] >       "size": "61245718",
	I0812 11:12:34.319079   40267 command_runner.go:130] >       "uid": null,
	I0812 11:12:34.319083   40267 command_runner.go:130] >       "username": "nonroot",
	I0812 11:12:34.319087   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.319090   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.319094   40267 command_runner.go:130] >     },
	I0812 11:12:34.319097   40267 command_runner.go:130] >     {
	I0812 11:12:34.319105   40267 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0812 11:12:34.319109   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.319114   40267 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0812 11:12:34.319120   40267 command_runner.go:130] >       ],
	I0812 11:12:34.319123   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.319134   40267 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0812 11:12:34.319143   40267 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0812 11:12:34.319147   40267 command_runner.go:130] >       ],
	I0812 11:12:34.319150   40267 command_runner.go:130] >       "size": "150779692",
	I0812 11:12:34.319154   40267 command_runner.go:130] >       "uid": {
	I0812 11:12:34.319158   40267 command_runner.go:130] >         "value": "0"
	I0812 11:12:34.319161   40267 command_runner.go:130] >       },
	I0812 11:12:34.319165   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.319169   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.319172   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.319176   40267 command_runner.go:130] >     },
	I0812 11:12:34.319179   40267 command_runner.go:130] >     {
	I0812 11:12:34.319185   40267 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0812 11:12:34.319189   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.319194   40267 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0812 11:12:34.319199   40267 command_runner.go:130] >       ],
	I0812 11:12:34.319203   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.319210   40267 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0812 11:12:34.319219   40267 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0812 11:12:34.319222   40267 command_runner.go:130] >       ],
	I0812 11:12:34.319226   40267 command_runner.go:130] >       "size": "117609954",
	I0812 11:12:34.319230   40267 command_runner.go:130] >       "uid": {
	I0812 11:12:34.319234   40267 command_runner.go:130] >         "value": "0"
	I0812 11:12:34.319237   40267 command_runner.go:130] >       },
	I0812 11:12:34.319241   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.319244   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.319250   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.319253   40267 command_runner.go:130] >     },
	I0812 11:12:34.319256   40267 command_runner.go:130] >     {
	I0812 11:12:34.319262   40267 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0812 11:12:34.319267   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.319272   40267 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0812 11:12:34.319275   40267 command_runner.go:130] >       ],
	I0812 11:12:34.319279   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.319299   40267 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0812 11:12:34.319309   40267 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0812 11:12:34.319320   40267 command_runner.go:130] >       ],
	I0812 11:12:34.319347   40267 command_runner.go:130] >       "size": "112198984",
	I0812 11:12:34.319354   40267 command_runner.go:130] >       "uid": {
	I0812 11:12:34.319357   40267 command_runner.go:130] >         "value": "0"
	I0812 11:12:34.319361   40267 command_runner.go:130] >       },
	I0812 11:12:34.319364   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.319368   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.319371   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.319374   40267 command_runner.go:130] >     },
	I0812 11:12:34.319377   40267 command_runner.go:130] >     {
	I0812 11:12:34.319382   40267 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0812 11:12:34.319386   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.319391   40267 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0812 11:12:34.319394   40267 command_runner.go:130] >       ],
	I0812 11:12:34.319397   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.319404   40267 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0812 11:12:34.319410   40267 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0812 11:12:34.319413   40267 command_runner.go:130] >       ],
	I0812 11:12:34.319420   40267 command_runner.go:130] >       "size": "85953945",
	I0812 11:12:34.319424   40267 command_runner.go:130] >       "uid": null,
	I0812 11:12:34.319428   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.319431   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.319435   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.319438   40267 command_runner.go:130] >     },
	I0812 11:12:34.319442   40267 command_runner.go:130] >     {
	I0812 11:12:34.319448   40267 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0812 11:12:34.319454   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.319458   40267 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0812 11:12:34.319464   40267 command_runner.go:130] >       ],
	I0812 11:12:34.319468   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.319475   40267 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0812 11:12:34.319484   40267 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0812 11:12:34.319487   40267 command_runner.go:130] >       ],
	I0812 11:12:34.319492   40267 command_runner.go:130] >       "size": "63051080",
	I0812 11:12:34.319498   40267 command_runner.go:130] >       "uid": {
	I0812 11:12:34.319509   40267 command_runner.go:130] >         "value": "0"
	I0812 11:12:34.319519   40267 command_runner.go:130] >       },
	I0812 11:12:34.319523   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.319527   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.319531   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.319534   40267 command_runner.go:130] >     },
	I0812 11:12:34.319537   40267 command_runner.go:130] >     {
	I0812 11:12:34.319543   40267 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0812 11:12:34.319549   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.319554   40267 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0812 11:12:34.319559   40267 command_runner.go:130] >       ],
	I0812 11:12:34.319562   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.319580   40267 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0812 11:12:34.319586   40267 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0812 11:12:34.319591   40267 command_runner.go:130] >       ],
	I0812 11:12:34.319595   40267 command_runner.go:130] >       "size": "750414",
	I0812 11:12:34.319599   40267 command_runner.go:130] >       "uid": {
	I0812 11:12:34.319603   40267 command_runner.go:130] >         "value": "65535"
	I0812 11:12:34.319606   40267 command_runner.go:130] >       },
	I0812 11:12:34.319610   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.319614   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.319618   40267 command_runner.go:130] >       "pinned": true
	I0812 11:12:34.319620   40267 command_runner.go:130] >     }
	I0812 11:12:34.319623   40267 command_runner.go:130] >   ]
	I0812 11:12:34.319626   40267 command_runner.go:130] > }
	I0812 11:12:34.320595   40267 crio.go:514] all images are preloaded for cri-o runtime.
	I0812 11:12:34.320618   40267 crio.go:433] Images already preloaded, skipping extraction
	I0812 11:12:34.320686   40267 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 11:12:34.353463   40267 command_runner.go:130] > {
	I0812 11:12:34.353483   40267 command_runner.go:130] >   "images": [
	I0812 11:12:34.353486   40267 command_runner.go:130] >     {
	I0812 11:12:34.353495   40267 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0812 11:12:34.353503   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.353513   40267 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0812 11:12:34.353518   40267 command_runner.go:130] >       ],
	I0812 11:12:34.353524   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.353535   40267 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0812 11:12:34.353545   40267 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0812 11:12:34.353555   40267 command_runner.go:130] >       ],
	I0812 11:12:34.353562   40267 command_runner.go:130] >       "size": "87165492",
	I0812 11:12:34.353579   40267 command_runner.go:130] >       "uid": null,
	I0812 11:12:34.353587   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.353594   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.353602   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.353608   40267 command_runner.go:130] >     },
	I0812 11:12:34.353617   40267 command_runner.go:130] >     {
	I0812 11:12:34.353627   40267 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0812 11:12:34.353636   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.353645   40267 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0812 11:12:34.353652   40267 command_runner.go:130] >       ],
	I0812 11:12:34.353657   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.353669   40267 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0812 11:12:34.353684   40267 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0812 11:12:34.353692   40267 command_runner.go:130] >       ],
	I0812 11:12:34.353698   40267 command_runner.go:130] >       "size": "87165492",
	I0812 11:12:34.353706   40267 command_runner.go:130] >       "uid": null,
	I0812 11:12:34.353718   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.353728   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.353737   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.353746   40267 command_runner.go:130] >     },
	I0812 11:12:34.353752   40267 command_runner.go:130] >     {
	I0812 11:12:34.353764   40267 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0812 11:12:34.353773   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.353781   40267 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0812 11:12:34.353789   40267 command_runner.go:130] >       ],
	I0812 11:12:34.353796   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.353809   40267 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0812 11:12:34.353820   40267 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0812 11:12:34.353827   40267 command_runner.go:130] >       ],
	I0812 11:12:34.353831   40267 command_runner.go:130] >       "size": "1363676",
	I0812 11:12:34.353837   40267 command_runner.go:130] >       "uid": null,
	I0812 11:12:34.353841   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.353845   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.353852   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.353855   40267 command_runner.go:130] >     },
	I0812 11:12:34.353860   40267 command_runner.go:130] >     {
	I0812 11:12:34.353871   40267 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0812 11:12:34.353878   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.353883   40267 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0812 11:12:34.353887   40267 command_runner.go:130] >       ],
	I0812 11:12:34.353891   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.353899   40267 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0812 11:12:34.353915   40267 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0812 11:12:34.353921   40267 command_runner.go:130] >       ],
	I0812 11:12:34.353925   40267 command_runner.go:130] >       "size": "31470524",
	I0812 11:12:34.353929   40267 command_runner.go:130] >       "uid": null,
	I0812 11:12:34.353933   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.353937   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.353941   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.353944   40267 command_runner.go:130] >     },
	I0812 11:12:34.353948   40267 command_runner.go:130] >     {
	I0812 11:12:34.353953   40267 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0812 11:12:34.353959   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.353964   40267 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0812 11:12:34.353968   40267 command_runner.go:130] >       ],
	I0812 11:12:34.353971   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.353978   40267 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0812 11:12:34.353987   40267 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0812 11:12:34.353996   40267 command_runner.go:130] >       ],
	I0812 11:12:34.354002   40267 command_runner.go:130] >       "size": "61245718",
	I0812 11:12:34.354006   40267 command_runner.go:130] >       "uid": null,
	I0812 11:12:34.354010   40267 command_runner.go:130] >       "username": "nonroot",
	I0812 11:12:34.354014   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.354018   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.354021   40267 command_runner.go:130] >     },
	I0812 11:12:34.354024   40267 command_runner.go:130] >     {
	I0812 11:12:34.354032   40267 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0812 11:12:34.354036   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.354041   40267 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0812 11:12:34.354046   40267 command_runner.go:130] >       ],
	I0812 11:12:34.354050   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.354057   40267 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0812 11:12:34.354070   40267 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0812 11:12:34.354077   40267 command_runner.go:130] >       ],
	I0812 11:12:34.354081   40267 command_runner.go:130] >       "size": "150779692",
	I0812 11:12:34.354086   40267 command_runner.go:130] >       "uid": {
	I0812 11:12:34.354090   40267 command_runner.go:130] >         "value": "0"
	I0812 11:12:34.354094   40267 command_runner.go:130] >       },
	I0812 11:12:34.354101   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.354107   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.354111   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.354115   40267 command_runner.go:130] >     },
	I0812 11:12:34.354118   40267 command_runner.go:130] >     {
	I0812 11:12:34.354124   40267 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0812 11:12:34.354129   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.354134   40267 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0812 11:12:34.354139   40267 command_runner.go:130] >       ],
	I0812 11:12:34.354143   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.354152   40267 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0812 11:12:34.354162   40267 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0812 11:12:34.354165   40267 command_runner.go:130] >       ],
	I0812 11:12:34.354169   40267 command_runner.go:130] >       "size": "117609954",
	I0812 11:12:34.354175   40267 command_runner.go:130] >       "uid": {
	I0812 11:12:34.354179   40267 command_runner.go:130] >         "value": "0"
	I0812 11:12:34.354182   40267 command_runner.go:130] >       },
	I0812 11:12:34.354191   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.354197   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.354201   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.354205   40267 command_runner.go:130] >     },
	I0812 11:12:34.354208   40267 command_runner.go:130] >     {
	I0812 11:12:34.354214   40267 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0812 11:12:34.354218   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.354223   40267 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0812 11:12:34.354229   40267 command_runner.go:130] >       ],
	I0812 11:12:34.354233   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.354253   40267 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0812 11:12:34.354264   40267 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0812 11:12:34.354267   40267 command_runner.go:130] >       ],
	I0812 11:12:34.354275   40267 command_runner.go:130] >       "size": "112198984",
	I0812 11:12:34.354279   40267 command_runner.go:130] >       "uid": {
	I0812 11:12:34.354283   40267 command_runner.go:130] >         "value": "0"
	I0812 11:12:34.354287   40267 command_runner.go:130] >       },
	I0812 11:12:34.354290   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.354294   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.354298   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.354302   40267 command_runner.go:130] >     },
	I0812 11:12:34.354305   40267 command_runner.go:130] >     {
	I0812 11:12:34.354313   40267 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0812 11:12:34.354317   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.354323   40267 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0812 11:12:34.354327   40267 command_runner.go:130] >       ],
	I0812 11:12:34.354333   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.354339   40267 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0812 11:12:34.354360   40267 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0812 11:12:34.354365   40267 command_runner.go:130] >       ],
	I0812 11:12:34.354369   40267 command_runner.go:130] >       "size": "85953945",
	I0812 11:12:34.354373   40267 command_runner.go:130] >       "uid": null,
	I0812 11:12:34.354377   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.354383   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.354388   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.354393   40267 command_runner.go:130] >     },
	I0812 11:12:34.354396   40267 command_runner.go:130] >     {
	I0812 11:12:34.354402   40267 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0812 11:12:34.354409   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.354413   40267 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0812 11:12:34.354419   40267 command_runner.go:130] >       ],
	I0812 11:12:34.354422   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.354429   40267 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0812 11:12:34.354438   40267 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0812 11:12:34.354442   40267 command_runner.go:130] >       ],
	I0812 11:12:34.354445   40267 command_runner.go:130] >       "size": "63051080",
	I0812 11:12:34.354449   40267 command_runner.go:130] >       "uid": {
	I0812 11:12:34.354453   40267 command_runner.go:130] >         "value": "0"
	I0812 11:12:34.354456   40267 command_runner.go:130] >       },
	I0812 11:12:34.354464   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.354470   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.354474   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.354478   40267 command_runner.go:130] >     },
	I0812 11:12:34.354481   40267 command_runner.go:130] >     {
	I0812 11:12:34.354487   40267 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0812 11:12:34.354493   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.354498   40267 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0812 11:12:34.354503   40267 command_runner.go:130] >       ],
	I0812 11:12:34.354507   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.354513   40267 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0812 11:12:34.354522   40267 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0812 11:12:34.354526   40267 command_runner.go:130] >       ],
	I0812 11:12:34.354530   40267 command_runner.go:130] >       "size": "750414",
	I0812 11:12:34.354534   40267 command_runner.go:130] >       "uid": {
	I0812 11:12:34.354538   40267 command_runner.go:130] >         "value": "65535"
	I0812 11:12:34.354541   40267 command_runner.go:130] >       },
	I0812 11:12:34.354545   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.354549   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.354553   40267 command_runner.go:130] >       "pinned": true
	I0812 11:12:34.354558   40267 command_runner.go:130] >     }
	I0812 11:12:34.354561   40267 command_runner.go:130] >   ]
	I0812 11:12:34.354564   40267 command_runner.go:130] > }
	I0812 11:12:34.354703   40267 crio.go:514] all images are preloaded for cri-o runtime.
	I0812 11:12:34.354719   40267 cache_images.go:84] Images are preloaded, skipping loading
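
The two crictl images --output json listings above are what the "all images are preloaded" decision is based on. A rough sketch of such a check, assuming the JSON shape shown in the listing and an abbreviated required-image list (the real list is derived from the Kubernetes version, v1.30.3 here), could look like this:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the fields visible in the crictl JSON dump above.
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl images:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("decode:", err)
		return
	}

	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}

	// A few of the tags visible in the listing above; abbreviated for illustration.
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.30.3",
		"registry.k8s.io/kube-controller-manager:v1.30.3",
		"registry.k8s.io/kube-scheduler:v1.30.3",
		"registry.k8s.io/kube-proxy:v1.30.3",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/pause:3.9",
	}
	for _, r := range required {
		fmt.Printf("%-55s preloaded=%v\n", r, have[r])
	}
}
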
	I0812 11:12:34.354728   40267 kubeadm.go:934] updating node { 192.168.39.95 8443 v1.30.3 crio true true} ...
	I0812 11:12:34.354853   40267 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-053297 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-053297 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
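
The kubelet unit fragment above is rendered per node from its name, IP and Kubernetes version. The sketch below reproduces that rendering with text/template; the template text is copied from the unit as logged, while the struct and wiring are illustrative assumptions rather than minikube's actual code:

package main

import (
	"os"
	"text/template"
)

// Drop-in text as it appears in the log above, with the node-specific
// values replaced by template fields.
const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	// Values for the primary control-plane node as logged above.
	_ = t.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.30.3", "multinode-053297", "192.168.39.95"})
}
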
	I0812 11:12:34.354925   40267 ssh_runner.go:195] Run: crio config
	I0812 11:12:34.386635   40267 command_runner.go:130] ! time="2024-08-12 11:12:34.361348603Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0812 11:12:34.393060   40267 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0812 11:12:34.398422   40267 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0812 11:12:34.398446   40267 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0812 11:12:34.398455   40267 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0812 11:12:34.398459   40267 command_runner.go:130] > #
	I0812 11:12:34.398468   40267 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0812 11:12:34.398478   40267 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0812 11:12:34.398487   40267 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0812 11:12:34.398502   40267 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0812 11:12:34.398511   40267 command_runner.go:130] > # reload'.
	I0812 11:12:34.398522   40267 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0812 11:12:34.398534   40267 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0812 11:12:34.398557   40267 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0812 11:12:34.398571   40267 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0812 11:12:34.398579   40267 command_runner.go:130] > [crio]
	I0812 11:12:34.398590   40267 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0812 11:12:34.398600   40267 command_runner.go:130] > # containers images, in this directory.
	I0812 11:12:34.398607   40267 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0812 11:12:34.398625   40267 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0812 11:12:34.398636   40267 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0812 11:12:34.398650   40267 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0812 11:12:34.398660   40267 command_runner.go:130] > # imagestore = ""
	I0812 11:12:34.398669   40267 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0812 11:12:34.398684   40267 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0812 11:12:34.398694   40267 command_runner.go:130] > storage_driver = "overlay"
	I0812 11:12:34.398702   40267 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0812 11:12:34.398711   40267 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0812 11:12:34.398721   40267 command_runner.go:130] > storage_option = [
	I0812 11:12:34.398729   40267 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0812 11:12:34.398737   40267 command_runner.go:130] > ]
	I0812 11:12:34.398748   40267 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0812 11:12:34.398761   40267 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0812 11:12:34.398771   40267 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0812 11:12:34.398784   40267 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0812 11:12:34.398795   40267 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0812 11:12:34.398805   40267 command_runner.go:130] > # always happen on a node reboot
	I0812 11:12:34.398815   40267 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0812 11:12:34.398835   40267 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0812 11:12:34.398848   40267 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0812 11:12:34.398859   40267 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0812 11:12:34.398870   40267 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0812 11:12:34.398882   40267 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0812 11:12:34.398898   40267 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0812 11:12:34.398907   40267 command_runner.go:130] > # internal_wipe = true
	I0812 11:12:34.398921   40267 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0812 11:12:34.398933   40267 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0812 11:12:34.398943   40267 command_runner.go:130] > # internal_repair = false
	I0812 11:12:34.398954   40267 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0812 11:12:34.398971   40267 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0812 11:12:34.398983   40267 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0812 11:12:34.398996   40267 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0812 11:12:34.399009   40267 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0812 11:12:34.399017   40267 command_runner.go:130] > [crio.api]
	I0812 11:12:34.399026   40267 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0812 11:12:34.399036   40267 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0812 11:12:34.399045   40267 command_runner.go:130] > # IP address on which the stream server will listen.
	I0812 11:12:34.399056   40267 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0812 11:12:34.399069   40267 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0812 11:12:34.399084   40267 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0812 11:12:34.399094   40267 command_runner.go:130] > # stream_port = "0"
	I0812 11:12:34.399104   40267 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0812 11:12:34.399114   40267 command_runner.go:130] > # stream_enable_tls = false
	I0812 11:12:34.399127   40267 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0812 11:12:34.399137   40267 command_runner.go:130] > # stream_idle_timeout = ""
	I0812 11:12:34.399148   40267 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0812 11:12:34.399160   40267 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0812 11:12:34.399168   40267 command_runner.go:130] > # minutes.
	I0812 11:12:34.399177   40267 command_runner.go:130] > # stream_tls_cert = ""
	I0812 11:12:34.399190   40267 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0812 11:12:34.399202   40267 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0812 11:12:34.399212   40267 command_runner.go:130] > # stream_tls_key = ""
	I0812 11:12:34.399225   40267 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0812 11:12:34.399237   40267 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0812 11:12:34.399265   40267 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0812 11:12:34.399275   40267 command_runner.go:130] > # stream_tls_ca = ""
	I0812 11:12:34.399289   40267 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0812 11:12:34.399299   40267 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0812 11:12:34.399313   40267 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0812 11:12:34.399323   40267 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0812 11:12:34.399333   40267 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0812 11:12:34.399351   40267 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0812 11:12:34.399359   40267 command_runner.go:130] > [crio.runtime]
	I0812 11:12:34.399375   40267 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0812 11:12:34.399388   40267 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0812 11:12:34.399404   40267 command_runner.go:130] > # "nofile=1024:2048"
	I0812 11:12:34.399417   40267 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0812 11:12:34.399424   40267 command_runner.go:130] > # default_ulimits = [
	I0812 11:12:34.399433   40267 command_runner.go:130] > # ]
	I0812 11:12:34.399444   40267 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0812 11:12:34.399452   40267 command_runner.go:130] > # no_pivot = false
	I0812 11:12:34.399463   40267 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0812 11:12:34.399476   40267 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0812 11:12:34.399488   40267 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0812 11:12:34.399500   40267 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0812 11:12:34.399509   40267 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0812 11:12:34.399522   40267 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0812 11:12:34.399533   40267 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0812 11:12:34.399542   40267 command_runner.go:130] > # Cgroup setting for conmon
	I0812 11:12:34.399554   40267 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0812 11:12:34.399563   40267 command_runner.go:130] > conmon_cgroup = "pod"
	I0812 11:12:34.399576   40267 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0812 11:12:34.399595   40267 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0812 11:12:34.399609   40267 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0812 11:12:34.399618   40267 command_runner.go:130] > conmon_env = [
	I0812 11:12:34.399630   40267 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0812 11:12:34.399638   40267 command_runner.go:130] > ]
	I0812 11:12:34.399648   40267 command_runner.go:130] > # Additional environment variables to set for all the
	I0812 11:12:34.399659   40267 command_runner.go:130] > # containers. These are overridden if set in the
	I0812 11:12:34.399671   40267 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0812 11:12:34.399680   40267 command_runner.go:130] > # default_env = [
	I0812 11:12:34.399686   40267 command_runner.go:130] > # ]
	I0812 11:12:34.399697   40267 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0812 11:12:34.399712   40267 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0812 11:12:34.399722   40267 command_runner.go:130] > # selinux = false
	I0812 11:12:34.399734   40267 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0812 11:12:34.399747   40267 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0812 11:12:34.399758   40267 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0812 11:12:34.399766   40267 command_runner.go:130] > # seccomp_profile = ""
	I0812 11:12:34.399779   40267 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0812 11:12:34.399791   40267 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0812 11:12:34.399811   40267 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0812 11:12:34.399827   40267 command_runner.go:130] > # which might increase security.
	I0812 11:12:34.399839   40267 command_runner.go:130] > # This option is currently deprecated,
	I0812 11:12:34.399850   40267 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0812 11:12:34.399861   40267 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0812 11:12:34.399874   40267 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0812 11:12:34.399888   40267 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0812 11:12:34.399901   40267 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0812 11:12:34.399913   40267 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0812 11:12:34.399923   40267 command_runner.go:130] > # This option supports live configuration reload.
	I0812 11:12:34.399931   40267 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0812 11:12:34.399944   40267 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0812 11:12:34.399954   40267 command_runner.go:130] > # the cgroup blockio controller.
	I0812 11:12:34.399962   40267 command_runner.go:130] > # blockio_config_file = ""
	I0812 11:12:34.399976   40267 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0812 11:12:34.399986   40267 command_runner.go:130] > # blockio parameters.
	I0812 11:12:34.399995   40267 command_runner.go:130] > # blockio_reload = false
	I0812 11:12:34.400006   40267 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0812 11:12:34.400028   40267 command_runner.go:130] > # irqbalance daemon.
	I0812 11:12:34.400040   40267 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0812 11:12:34.400053   40267 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0812 11:12:34.400067   40267 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0812 11:12:34.400081   40267 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0812 11:12:34.400094   40267 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0812 11:12:34.400106   40267 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0812 11:12:34.400116   40267 command_runner.go:130] > # This option supports live configuration reload.
	I0812 11:12:34.400125   40267 command_runner.go:130] > # rdt_config_file = ""
	I0812 11:12:34.400136   40267 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0812 11:12:34.400146   40267 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0812 11:12:34.400188   40267 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0812 11:12:34.400201   40267 command_runner.go:130] > # separate_pull_cgroup = ""
	I0812 11:12:34.400212   40267 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0812 11:12:34.400225   40267 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0812 11:12:34.400235   40267 command_runner.go:130] > # will be added.
	I0812 11:12:34.400244   40267 command_runner.go:130] > # default_capabilities = [
	I0812 11:12:34.400251   40267 command_runner.go:130] > # 	"CHOWN",
	I0812 11:12:34.400268   40267 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0812 11:12:34.400278   40267 command_runner.go:130] > # 	"FSETID",
	I0812 11:12:34.400285   40267 command_runner.go:130] > # 	"FOWNER",
	I0812 11:12:34.400292   40267 command_runner.go:130] > # 	"SETGID",
	I0812 11:12:34.400301   40267 command_runner.go:130] > # 	"SETUID",
	I0812 11:12:34.400308   40267 command_runner.go:130] > # 	"SETPCAP",
	I0812 11:12:34.400318   40267 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0812 11:12:34.400326   40267 command_runner.go:130] > # 	"KILL",
	I0812 11:12:34.400334   40267 command_runner.go:130] > # ]
	I0812 11:12:34.400351   40267 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0812 11:12:34.400362   40267 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0812 11:12:34.400371   40267 command_runner.go:130] > # add_inheritable_capabilities = false
	I0812 11:12:34.400384   40267 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0812 11:12:34.400397   40267 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0812 11:12:34.400407   40267 command_runner.go:130] > default_sysctls = [
	I0812 11:12:34.400417   40267 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0812 11:12:34.400424   40267 command_runner.go:130] > ]
	I0812 11:12:34.400433   40267 command_runner.go:130] > # List of devices on the host that a
	I0812 11:12:34.400446   40267 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0812 11:12:34.400455   40267 command_runner.go:130] > # allowed_devices = [
	I0812 11:12:34.400462   40267 command_runner.go:130] > # 	"/dev/fuse",
	I0812 11:12:34.400470   40267 command_runner.go:130] > # ]
	I0812 11:12:34.400478   40267 command_runner.go:130] > # List of additional devices, specified as
	I0812 11:12:34.400493   40267 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0812 11:12:34.400505   40267 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0812 11:12:34.400517   40267 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0812 11:12:34.400526   40267 command_runner.go:130] > # additional_devices = [
	I0812 11:12:34.400532   40267 command_runner.go:130] > # ]
	I0812 11:12:34.400542   40267 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0812 11:12:34.400552   40267 command_runner.go:130] > # cdi_spec_dirs = [
	I0812 11:12:34.400560   40267 command_runner.go:130] > # 	"/etc/cdi",
	I0812 11:12:34.400568   40267 command_runner.go:130] > # 	"/var/run/cdi",
	I0812 11:12:34.400574   40267 command_runner.go:130] > # ]
	I0812 11:12:34.400587   40267 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0812 11:12:34.400600   40267 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0812 11:12:34.400608   40267 command_runner.go:130] > # Defaults to false.
	I0812 11:12:34.400626   40267 command_runner.go:130] > # device_ownership_from_security_context = false
	I0812 11:12:34.400640   40267 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0812 11:12:34.400652   40267 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0812 11:12:34.400662   40267 command_runner.go:130] > # hooks_dir = [
	I0812 11:12:34.400671   40267 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0812 11:12:34.400679   40267 command_runner.go:130] > # ]
	I0812 11:12:34.400689   40267 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0812 11:12:34.400702   40267 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0812 11:12:34.400711   40267 command_runner.go:130] > # its default mounts from the following two files:
	I0812 11:12:34.400719   40267 command_runner.go:130] > #
	I0812 11:12:34.400730   40267 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0812 11:12:34.400743   40267 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0812 11:12:34.400756   40267 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0812 11:12:34.400763   40267 command_runner.go:130] > #
	I0812 11:12:34.400781   40267 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0812 11:12:34.400794   40267 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0812 11:12:34.400805   40267 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0812 11:12:34.400816   40267 command_runner.go:130] > #      only add mounts it finds in this file.
	I0812 11:12:34.400821   40267 command_runner.go:130] > #
	I0812 11:12:34.400828   40267 command_runner.go:130] > # default_mounts_file = ""
	I0812 11:12:34.400838   40267 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0812 11:12:34.400852   40267 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0812 11:12:34.400862   40267 command_runner.go:130] > pids_limit = 1024
	I0812 11:12:34.400879   40267 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0812 11:12:34.400892   40267 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0812 11:12:34.400906   40267 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0812 11:12:34.400922   40267 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0812 11:12:34.400931   40267 command_runner.go:130] > # log_size_max = -1
	I0812 11:12:34.400943   40267 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0812 11:12:34.400952   40267 command_runner.go:130] > # log_to_journald = false
	I0812 11:12:34.400963   40267 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0812 11:12:34.400974   40267 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0812 11:12:34.400984   40267 command_runner.go:130] > # Path to directory for container attach sockets.
	I0812 11:12:34.400994   40267 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0812 11:12:34.401007   40267 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0812 11:12:34.401016   40267 command_runner.go:130] > # bind_mount_prefix = ""
	I0812 11:12:34.401035   40267 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0812 11:12:34.401044   40267 command_runner.go:130] > # read_only = false
	I0812 11:12:34.401055   40267 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0812 11:12:34.401068   40267 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0812 11:12:34.401078   40267 command_runner.go:130] > # live configuration reload.
	I0812 11:12:34.401088   40267 command_runner.go:130] > # log_level = "info"
	I0812 11:12:34.401098   40267 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0812 11:12:34.401109   40267 command_runner.go:130] > # This option supports live configuration reload.
	I0812 11:12:34.401116   40267 command_runner.go:130] > # log_filter = ""
	I0812 11:12:34.401129   40267 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0812 11:12:34.401143   40267 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0812 11:12:34.401152   40267 command_runner.go:130] > # separated by comma.
	I0812 11:12:34.401165   40267 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0812 11:12:34.401175   40267 command_runner.go:130] > # uid_mappings = ""
	I0812 11:12:34.401188   40267 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0812 11:12:34.401201   40267 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0812 11:12:34.401208   40267 command_runner.go:130] > # separated by comma.
	I0812 11:12:34.401221   40267 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0812 11:12:34.401230   40267 command_runner.go:130] > # gid_mappings = ""
	I0812 11:12:34.401240   40267 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0812 11:12:34.401253   40267 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0812 11:12:34.401265   40267 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0812 11:12:34.401281   40267 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0812 11:12:34.401290   40267 command_runner.go:130] > # minimum_mappable_uid = -1
	I0812 11:12:34.401301   40267 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0812 11:12:34.401318   40267 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0812 11:12:34.401331   40267 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0812 11:12:34.401350   40267 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0812 11:12:34.401361   40267 command_runner.go:130] > # minimum_mappable_gid = -1
	I0812 11:12:34.401372   40267 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0812 11:12:34.401384   40267 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0812 11:12:34.401396   40267 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0812 11:12:34.401406   40267 command_runner.go:130] > # ctr_stop_timeout = 30
	I0812 11:12:34.401419   40267 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0812 11:12:34.401431   40267 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0812 11:12:34.401443   40267 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0812 11:12:34.401459   40267 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0812 11:12:34.401469   40267 command_runner.go:130] > drop_infra_ctr = false
	I0812 11:12:34.401480   40267 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0812 11:12:34.401492   40267 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0812 11:12:34.401506   40267 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0812 11:12:34.401516   40267 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0812 11:12:34.401529   40267 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I0812 11:12:34.401541   40267 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0812 11:12:34.401551   40267 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0812 11:12:34.401563   40267 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0812 11:12:34.401572   40267 command_runner.go:130] > # shared_cpuset = ""
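infra_ctr_cpuset and shared_cpuset above both take the Linux CPU list format (for example "0-3,8"). A minimal Go sketch of how such a list expands into individual CPU ids, assuming a hypothetical expandCPUList helper:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// expandCPUList expands a Linux CPU list such as "0-3,8" into [0 1 2 3 8].
	func expandCPUList(s string) ([]int, error) {
		var cpus []int
		for _, part := range strings.Split(s, ",") {
			part = strings.TrimSpace(part)
			if part == "" {
				continue
			}
			bounds := strings.SplitN(part, "-", 2)
			lo, err := strconv.Atoi(bounds[0])
			if err != nil {
				return nil, err
			}
			hi := lo
			if len(bounds) == 2 {
				if hi, err = strconv.Atoi(bounds[1]); err != nil {
					return nil, err
				}
			}
			for c := lo; c <= hi; c++ {
				cpus = append(cpus, c)
			}
		}
		return cpus, nil
	}

	func main() {
		fmt.Println(expandCPUList("0-3,8")) // [0 1 2 3 8] <nil>
	}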
	I0812 11:12:34.401583   40267 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0812 11:12:34.401594   40267 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0812 11:12:34.401605   40267 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0812 11:12:34.401620   40267 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0812 11:12:34.401629   40267 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0812 11:12:34.401637   40267 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0812 11:12:34.401651   40267 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0812 11:12:34.401661   40267 command_runner.go:130] > # enable_criu_support = false
	I0812 11:12:34.401673   40267 command_runner.go:130] > # Enable/disable the generation of the container,
	I0812 11:12:34.401683   40267 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0812 11:12:34.401693   40267 command_runner.go:130] > # enable_pod_events = false
	I0812 11:12:34.401705   40267 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0812 11:12:34.401728   40267 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0812 11:12:34.401736   40267 command_runner.go:130] > # default_runtime = "runc"
	I0812 11:12:34.401747   40267 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0812 11:12:34.401762   40267 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0812 11:12:34.401779   40267 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0812 11:12:34.401791   40267 command_runner.go:130] > # creation as a file is not desired either.
	I0812 11:12:34.401808   40267 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0812 11:12:34.401819   40267 command_runner.go:130] > # the hostname is being managed dynamically.
	I0812 11:12:34.401827   40267 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0812 11:12:34.401835   40267 command_runner.go:130] > # ]
	I0812 11:12:34.401846   40267 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0812 11:12:34.401860   40267 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0812 11:12:34.401881   40267 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0812 11:12:34.401893   40267 command_runner.go:130] > # Each entry in the table should follow the format:
	I0812 11:12:34.401901   40267 command_runner.go:130] > #
	I0812 11:12:34.401908   40267 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0812 11:12:34.401918   40267 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0812 11:12:34.401975   40267 command_runner.go:130] > # runtime_type = "oci"
	I0812 11:12:34.401985   40267 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0812 11:12:34.401993   40267 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0812 11:12:34.401999   40267 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0812 11:12:34.402006   40267 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0812 11:12:34.402015   40267 command_runner.go:130] > # monitor_env = []
	I0812 11:12:34.402024   40267 command_runner.go:130] > # privileged_without_host_devices = false
	I0812 11:12:34.402034   40267 command_runner.go:130] > # allowed_annotations = []
	I0812 11:12:34.402045   40267 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0812 11:12:34.402054   40267 command_runner.go:130] > # Where:
	I0812 11:12:34.402064   40267 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0812 11:12:34.402076   40267 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0812 11:12:34.402088   40267 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0812 11:12:34.402100   40267 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0812 11:12:34.402108   40267 command_runner.go:130] > #   in $PATH.
	I0812 11:12:34.402119   40267 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0812 11:12:34.402129   40267 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0812 11:12:34.402140   40267 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0812 11:12:34.402149   40267 command_runner.go:130] > #   state.
	I0812 11:12:34.402159   40267 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0812 11:12:34.402172   40267 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0812 11:12:34.402183   40267 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0812 11:12:34.402195   40267 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0812 11:12:34.402207   40267 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0812 11:12:34.402221   40267 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0812 11:12:34.402232   40267 command_runner.go:130] > #   The currently recognized values are:
	I0812 11:12:34.402247   40267 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0812 11:12:34.402261   40267 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0812 11:12:34.402274   40267 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0812 11:12:34.402287   40267 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0812 11:12:34.402301   40267 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0812 11:12:34.402321   40267 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0812 11:12:34.402335   40267 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0812 11:12:34.402352   40267 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0812 11:12:34.402366   40267 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0812 11:12:34.402379   40267 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0812 11:12:34.402389   40267 command_runner.go:130] > #   deprecated option "conmon".
	I0812 11:12:34.402404   40267 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0812 11:12:34.402415   40267 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0812 11:12:34.402430   40267 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0812 11:12:34.402441   40267 command_runner.go:130] > #   should be moved to the container's cgroup
	I0812 11:12:34.402455   40267 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0812 11:12:34.402466   40267 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0812 11:12:34.402478   40267 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0812 11:12:34.402489   40267 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0812 11:12:34.402496   40267 command_runner.go:130] > #
	I0812 11:12:34.402504   40267 command_runner.go:130] > # Using the seccomp notifier feature:
	I0812 11:12:34.402511   40267 command_runner.go:130] > #
	I0812 11:12:34.402521   40267 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0812 11:12:34.402535   40267 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0812 11:12:34.402543   40267 command_runner.go:130] > #
	I0812 11:12:34.402554   40267 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0812 11:12:34.402567   40267 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0812 11:12:34.402575   40267 command_runner.go:130] > #
	I0812 11:12:34.402585   40267 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0812 11:12:34.402594   40267 command_runner.go:130] > # feature.
	I0812 11:12:34.402599   40267 command_runner.go:130] > #
	I0812 11:12:34.402609   40267 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0812 11:12:34.402623   40267 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0812 11:12:34.402635   40267 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0812 11:12:34.402649   40267 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0812 11:12:34.402661   40267 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0812 11:12:34.402669   40267 command_runner.go:130] > #
	I0812 11:12:34.402679   40267 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0812 11:12:34.402692   40267 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0812 11:12:34.402700   40267 command_runner.go:130] > #
	I0812 11:12:34.402710   40267 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0812 11:12:34.402728   40267 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0812 11:12:34.402744   40267 command_runner.go:130] > #
	I0812 11:12:34.402755   40267 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0812 11:12:34.402766   40267 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0812 11:12:34.402775   40267 command_runner.go:130] > # limitation.
	I0812 11:12:34.402785   40267 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0812 11:12:34.402796   40267 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0812 11:12:34.402803   40267 command_runner.go:130] > runtime_type = "oci"
	I0812 11:12:34.402811   40267 command_runner.go:130] > runtime_root = "/run/runc"
	I0812 11:12:34.402819   40267 command_runner.go:130] > runtime_config_path = ""
	I0812 11:12:34.402827   40267 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0812 11:12:34.402836   40267 command_runner.go:130] > monitor_cgroup = "pod"
	I0812 11:12:34.402843   40267 command_runner.go:130] > monitor_exec_cgroup = ""
	I0812 11:12:34.402853   40267 command_runner.go:130] > monitor_env = [
	I0812 11:12:34.402863   40267 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0812 11:12:34.402870   40267 command_runner.go:130] > ]
	I0812 11:12:34.402879   40267 command_runner.go:130] > privileged_without_host_devices = false
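The handler selection described earlier (use the runtime handler provided by the CRI, fall back to default_runtime when none is given) can be pictured with the short Go sketch below; the runtimeHandler struct and pickRuntime function are illustrative stand-ins, not CRI-O's internal types:

	package main

	import "fmt"

	// runtimeHandler mirrors a few fields of a [crio.runtime.runtimes.*] entry.
	type runtimeHandler struct {
		RuntimePath string
		RuntimeType string
		RuntimeRoot string
	}

	// pickRuntime returns the handler requested by the CRI, or the default when none is given.
	func pickRuntime(handlers map[string]runtimeHandler, requested, defaultName string) (runtimeHandler, error) {
		name := requested
		if name == "" {
			name = defaultName
		}
		h, ok := handlers[name]
		if !ok {
			return runtimeHandler{}, fmt.Errorf("no runtime handler %q configured", name)
		}
		return h, nil
	}

	func main() {
		handlers := map[string]runtimeHandler{
			"runc": {RuntimePath: "/usr/bin/runc", RuntimeType: "oci", RuntimeRoot: "/run/runc"},
		}
		h, err := pickRuntime(handlers, "", "runc")
		fmt.Println(h.RuntimePath, err) // /usr/bin/runc <nil>
	}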
	I0812 11:12:34.402892   40267 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0812 11:12:34.402903   40267 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0812 11:12:34.402916   40267 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0812 11:12:34.402931   40267 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0812 11:12:34.402945   40267 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0812 11:12:34.402958   40267 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0812 11:12:34.402975   40267 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0812 11:12:34.402989   40267 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0812 11:12:34.402997   40267 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0812 11:12:34.403005   40267 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0812 11:12:34.403011   40267 command_runner.go:130] > # Example:
	I0812 11:12:34.403018   40267 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0812 11:12:34.403025   40267 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0812 11:12:34.403033   40267 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0812 11:12:34.403042   40267 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0812 11:12:34.403049   40267 command_runner.go:130] > # cpuset = 0
	I0812 11:12:34.403056   40267 command_runner.go:130] > # cpushares = "0-1"
	I0812 11:12:34.403061   40267 command_runner.go:130] > # Where:
	I0812 11:12:34.403068   40267 command_runner.go:130] > # The workload name is workload-type.
	I0812 11:12:34.403087   40267 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0812 11:12:34.403097   40267 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0812 11:12:34.403106   40267 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0812 11:12:34.403119   40267 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0812 11:12:34.403128   40267 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
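As a loose illustration of the workload annotation flow above (a key-only activation annotation plus per-container JSON overrides), the following Go sketch uses hypothetical names (containerOverrides) and should not be read as CRI-O's actual workloads implementation:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// containerOverrides decodes a per-container override such as {"cpushares": "512"}
	// when the pod carries the activation annotation (key-only match, value ignored).
	func containerOverrides(annotations map[string]string, activation, prefix, ctrName string) (map[string]string, bool) {
		if _, ok := annotations[activation]; !ok {
			return nil, false // workload not activated for this pod
		}
		raw, ok := annotations[prefix+"/"+ctrName]
		if !ok {
			return map[string]string{}, true // activated, but the defaults apply
		}
		overrides := map[string]string{}
		if err := json.Unmarshal([]byte(raw), &overrides); err != nil {
			return map[string]string{}, true
		}
		return overrides, true
	}

	func main() {
		ann := map[string]string{
			"io.crio/workload":                  "",
			"io.crio.workload-type/mycontainer": `{"cpushares": "512"}`,
		}
		fmt.Println(containerOverrides(ann, "io.crio/workload", "io.crio.workload-type", "mycontainer"))
	}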
	I0812 11:12:34.403137   40267 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0812 11:12:34.403147   40267 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0812 11:12:34.403155   40267 command_runner.go:130] > # Default value is set to true
	I0812 11:12:34.403162   40267 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0812 11:12:34.403170   40267 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0812 11:12:34.403178   40267 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0812 11:12:34.403186   40267 command_runner.go:130] > # Default value is set to 'false'
	I0812 11:12:34.403193   40267 command_runner.go:130] > # disable_hostport_mapping = false
	I0812 11:12:34.403202   40267 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0812 11:12:34.403210   40267 command_runner.go:130] > #
	I0812 11:12:34.403220   40267 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0812 11:12:34.403233   40267 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0812 11:12:34.403246   40267 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0812 11:12:34.403263   40267 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0812 11:12:34.403279   40267 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0812 11:12:34.403287   40267 command_runner.go:130] > [crio.image]
	I0812 11:12:34.403297   40267 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0812 11:12:34.403307   40267 command_runner.go:130] > # default_transport = "docker://"
	I0812 11:12:34.403320   40267 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0812 11:12:34.403334   40267 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0812 11:12:34.403344   40267 command_runner.go:130] > # global_auth_file = ""
	I0812 11:12:34.403360   40267 command_runner.go:130] > # The image used to instantiate infra containers.
	I0812 11:12:34.403372   40267 command_runner.go:130] > # This option supports live configuration reload.
	I0812 11:12:34.403384   40267 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0812 11:12:34.403397   40267 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0812 11:12:34.403410   40267 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0812 11:12:34.403422   40267 command_runner.go:130] > # This option supports live configuration reload.
	I0812 11:12:34.403432   40267 command_runner.go:130] > # pause_image_auth_file = ""
	I0812 11:12:34.403445   40267 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0812 11:12:34.403458   40267 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0812 11:12:34.403468   40267 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0812 11:12:34.403487   40267 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0812 11:12:34.403498   40267 command_runner.go:130] > # pause_command = "/pause"
	I0812 11:12:34.403510   40267 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0812 11:12:34.403524   40267 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0812 11:12:34.403537   40267 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0812 11:12:34.403554   40267 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0812 11:12:34.403567   40267 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0812 11:12:34.403579   40267 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0812 11:12:34.403590   40267 command_runner.go:130] > # pinned_images = [
	I0812 11:12:34.403596   40267 command_runner.go:130] > # ]
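The pinning patterns described above (exact names, a trailing * glob, or a keyword wrapped in *) could be matched along the lines of this Go sketch; matchesPin is an illustrative helper, not CRI-O's matcher:

	package main

	import (
		"fmt"
		"strings"
	)

	// matchesPin reports whether an image name matches a pinned_images pattern:
	// exact match, "prefix*" glob, or "*keyword*" substring match.
	func matchesPin(pattern, image string) bool {
		switch {
		case strings.HasPrefix(pattern, "*") && strings.HasSuffix(pattern, "*"):
			return strings.Contains(image, strings.Trim(pattern, "*"))
		case strings.HasSuffix(pattern, "*"):
			return strings.HasPrefix(image, strings.TrimSuffix(pattern, "*"))
		default:
			return pattern == image
		}
	}

	func main() {
		fmt.Println(matchesPin("registry.k8s.io/pause:3.9", "registry.k8s.io/pause:3.9")) // true
		fmt.Println(matchesPin("registry.k8s.io/*", "registry.k8s.io/pause:3.9"))         // true
		fmt.Println(matchesPin("*pause*", "registry.k8s.io/pause:3.9"))                   // true
	}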
	I0812 11:12:34.403608   40267 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0812 11:12:34.403621   40267 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0812 11:12:34.403635   40267 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0812 11:12:34.403648   40267 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0812 11:12:34.403669   40267 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0812 11:12:34.403677   40267 command_runner.go:130] > # signature_policy = ""
	I0812 11:12:34.403687   40267 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0812 11:12:34.403702   40267 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0812 11:12:34.403715   40267 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0812 11:12:34.403729   40267 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0812 11:12:34.403742   40267 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0812 11:12:34.403753   40267 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0812 11:12:34.403765   40267 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0812 11:12:34.403776   40267 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0812 11:12:34.403785   40267 command_runner.go:130] > # changing them here.
	I0812 11:12:34.403793   40267 command_runner.go:130] > # insecure_registries = [
	I0812 11:12:34.403801   40267 command_runner.go:130] > # ]
	I0812 11:12:34.403812   40267 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0812 11:12:34.403823   40267 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0812 11:12:34.403833   40267 command_runner.go:130] > # image_volumes = "mkdir"
	I0812 11:12:34.403842   40267 command_runner.go:130] > # Temporary directory to use for storing big files
	I0812 11:12:34.403852   40267 command_runner.go:130] > # big_files_temporary_dir = ""
	I0812 11:12:34.403863   40267 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0812 11:12:34.403876   40267 command_runner.go:130] > # CNI plugins.
	I0812 11:12:34.403884   40267 command_runner.go:130] > [crio.network]
	I0812 11:12:34.403895   40267 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0812 11:12:34.403914   40267 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0812 11:12:34.403924   40267 command_runner.go:130] > # cni_default_network = ""
	I0812 11:12:34.403935   40267 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0812 11:12:34.403943   40267 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0812 11:12:34.403955   40267 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0812 11:12:34.403964   40267 command_runner.go:130] > # plugin_dirs = [
	I0812 11:12:34.403972   40267 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0812 11:12:34.403980   40267 command_runner.go:130] > # ]
	I0812 11:12:34.403990   40267 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0812 11:12:34.403999   40267 command_runner.go:130] > [crio.metrics]
	I0812 11:12:34.404007   40267 command_runner.go:130] > # Globally enable or disable metrics support.
	I0812 11:12:34.404017   40267 command_runner.go:130] > enable_metrics = true
	I0812 11:12:34.404025   40267 command_runner.go:130] > # Specify enabled metrics collectors.
	I0812 11:12:34.404033   40267 command_runner.go:130] > # Per default all metrics are enabled.
	I0812 11:12:34.404046   40267 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0812 11:12:34.404059   40267 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0812 11:12:34.404071   40267 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0812 11:12:34.404080   40267 command_runner.go:130] > # metrics_collectors = [
	I0812 11:12:34.404088   40267 command_runner.go:130] > # 	"operations",
	I0812 11:12:34.404100   40267 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0812 11:12:34.404107   40267 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0812 11:12:34.404114   40267 command_runner.go:130] > # 	"operations_errors",
	I0812 11:12:34.404124   40267 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0812 11:12:34.404133   40267 command_runner.go:130] > # 	"image_pulls_by_name",
	I0812 11:12:34.404142   40267 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0812 11:12:34.404151   40267 command_runner.go:130] > # 	"image_pulls_failures",
	I0812 11:12:34.404161   40267 command_runner.go:130] > # 	"image_pulls_successes",
	I0812 11:12:34.404170   40267 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0812 11:12:34.404179   40267 command_runner.go:130] > # 	"image_layer_reuse",
	I0812 11:12:34.404188   40267 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0812 11:12:34.404197   40267 command_runner.go:130] > # 	"containers_oom_total",
	I0812 11:12:34.404205   40267 command_runner.go:130] > # 	"containers_oom",
	I0812 11:12:34.404214   40267 command_runner.go:130] > # 	"processes_defunct",
	I0812 11:12:34.404222   40267 command_runner.go:130] > # 	"operations_total",
	I0812 11:12:34.404230   40267 command_runner.go:130] > # 	"operations_latency_seconds",
	I0812 11:12:34.404240   40267 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0812 11:12:34.404256   40267 command_runner.go:130] > # 	"operations_errors_total",
	I0812 11:12:34.404267   40267 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0812 11:12:34.404277   40267 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0812 11:12:34.404286   40267 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0812 11:12:34.404294   40267 command_runner.go:130] > # 	"image_pulls_success_total",
	I0812 11:12:34.404303   40267 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0812 11:12:34.404312   40267 command_runner.go:130] > # 	"containers_oom_count_total",
	I0812 11:12:34.404321   40267 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0812 11:12:34.404331   40267 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0812 11:12:34.404337   40267 command_runner.go:130] > # ]
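Per the note above, each collector name may also be written with a "container_runtime_" or "crio_" prefix. A tiny Go sketch of that normalization, using a hypothetical canonicalCollector helper:

	package main

	import (
		"fmt"
		"strings"
	)

	// canonicalCollector strips the optional prefixes so that "operations",
	// "crio_operations" and "container_runtime_crio_operations" compare equal.
	func canonicalCollector(name string) string {
		name = strings.TrimPrefix(name, "container_runtime_")
		return strings.TrimPrefix(name, "crio_")
	}

	func main() {
		fmt.Println(canonicalCollector("operations") == canonicalCollector("container_runtime_crio_operations")) // true
		fmt.Println(canonicalCollector("crio_operations"))                                                       // operations
	}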
	I0812 11:12:34.404351   40267 command_runner.go:130] > # The port on which the metrics server will listen.
	I0812 11:12:34.404361   40267 command_runner.go:130] > # metrics_port = 9090
	I0812 11:12:34.404371   40267 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0812 11:12:34.404381   40267 command_runner.go:130] > # metrics_socket = ""
	I0812 11:12:34.404390   40267 command_runner.go:130] > # The certificate for the secure metrics server.
	I0812 11:12:34.404402   40267 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0812 11:12:34.404413   40267 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0812 11:12:34.404424   40267 command_runner.go:130] > # certificate on any modification event.
	I0812 11:12:34.404434   40267 command_runner.go:130] > # metrics_cert = ""
	I0812 11:12:34.404446   40267 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0812 11:12:34.404457   40267 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0812 11:12:34.404465   40267 command_runner.go:130] > # metrics_key = ""
	I0812 11:12:34.404475   40267 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0812 11:12:34.404483   40267 command_runner.go:130] > [crio.tracing]
	I0812 11:12:34.404492   40267 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0812 11:12:34.404501   40267 command_runner.go:130] > # enable_tracing = false
	I0812 11:12:34.404511   40267 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0812 11:12:34.404521   40267 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0812 11:12:34.404536   40267 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0812 11:12:34.404547   40267 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0812 11:12:34.404555   40267 command_runner.go:130] > # CRI-O NRI configuration.
	I0812 11:12:34.404562   40267 command_runner.go:130] > [crio.nri]
	I0812 11:12:34.404569   40267 command_runner.go:130] > # Globally enable or disable NRI.
	I0812 11:12:34.404576   40267 command_runner.go:130] > # enable_nri = false
	I0812 11:12:34.404586   40267 command_runner.go:130] > # NRI socket to listen on.
	I0812 11:12:34.404595   40267 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0812 11:12:34.404609   40267 command_runner.go:130] > # NRI plugin directory to use.
	I0812 11:12:34.404621   40267 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0812 11:12:34.404630   40267 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0812 11:12:34.404641   40267 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0812 11:12:34.404653   40267 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0812 11:12:34.404662   40267 command_runner.go:130] > # nri_disable_connections = false
	I0812 11:12:34.404672   40267 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0812 11:12:34.404682   40267 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0812 11:12:34.404692   40267 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0812 11:12:34.404703   40267 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0812 11:12:34.404716   40267 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0812 11:12:34.404724   40267 command_runner.go:130] > [crio.stats]
	I0812 11:12:34.404735   40267 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0812 11:12:34.404747   40267 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0812 11:12:34.404756   40267 command_runner.go:130] > # stats_collection_period = 0
	I0812 11:12:34.404952   40267 cni.go:84] Creating CNI manager for ""
	I0812 11:12:34.404969   40267 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0812 11:12:34.404983   40267 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0812 11:12:34.405021   40267 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.95 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-053297 NodeName:multinode-053297 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.95"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.95 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0812 11:12:34.405189   40267 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.95
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-053297"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.95
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.95"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0812 11:12:34.405269   40267 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0812 11:12:34.415663   40267 command_runner.go:130] > kubeadm
	I0812 11:12:34.415686   40267 command_runner.go:130] > kubectl
	I0812 11:12:34.415692   40267 command_runner.go:130] > kubelet
	I0812 11:12:34.415760   40267 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 11:12:34.415816   40267 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0812 11:12:34.426102   40267 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0812 11:12:34.444340   40267 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 11:12:34.461435   40267 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
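The kubeadm.yaml staged here carries four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---", as rendered above. A trivial Go sketch of splitting such a file into its documents (illustrative only, with a stand-in string instead of the real file):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// A stand-in for the generated kubeadm.yaml; only the kind lines matter here.
		config := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration\n"
		docs := strings.Split(config, "\n---\n")
		fmt.Println(len(docs)) // 4
		for _, d := range docs {
			fmt.Println(strings.TrimSpace(d))
		}
	}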
	I0812 11:12:34.477748   40267 ssh_runner.go:195] Run: grep 192.168.39.95	control-plane.minikube.internal$ /etc/hosts
	I0812 11:12:34.481459   40267 command_runner.go:130] > 192.168.39.95	control-plane.minikube.internal
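The grep above only confirms that /etc/hosts already maps control-plane.minikube.internal to the node IP before kubelet is restarted. An equivalent standalone check in Go might look like the sketch below (hasHostEntry is a hypothetical helper, not minikube's implementation):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// hasHostEntry reports whether the hosts file at path contains a line mapping ip to host.
	func hasHostEntry(path, ip, host string) (bool, error) {
		f, err := os.Open(path)
		if err != nil {
			return false, err
		}
		defer f.Close()

		scanner := bufio.NewScanner(f)
		for scanner.Scan() {
			fields := strings.Fields(scanner.Text())
			if len(fields) >= 2 && fields[0] == ip {
				for _, name := range fields[1:] {
					if name == host {
						return true, nil
					}
				}
			}
		}
		return false, scanner.Err()
	}

	func main() {
		ok, err := hasHostEntry("/etc/hosts", "192.168.39.95", "control-plane.minikube.internal")
		fmt.Println(ok, err)
	}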
	I0812 11:12:34.481631   40267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:12:34.628531   40267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 11:12:34.643291   40267 certs.go:68] Setting up /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/multinode-053297 for IP: 192.168.39.95
	I0812 11:12:34.643315   40267 certs.go:194] generating shared ca certs ...
	I0812 11:12:34.643330   40267 certs.go:226] acquiring lock for ca certs: {Name:mkab0b83a49d5695bc804bf960b468b7a47f73f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:12:34.643505   40267 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key
	I0812 11:12:34.643548   40267 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key
	I0812 11:12:34.643557   40267 certs.go:256] generating profile certs ...
	I0812 11:12:34.643630   40267 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/multinode-053297/client.key
	I0812 11:12:34.643687   40267 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/multinode-053297/apiserver.key.345acae3
	I0812 11:12:34.643730   40267 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/multinode-053297/proxy-client.key
	I0812 11:12:34.643742   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0812 11:12:34.643756   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0812 11:12:34.643768   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0812 11:12:34.643780   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0812 11:12:34.643794   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/multinode-053297/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0812 11:12:34.643812   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/multinode-053297/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0812 11:12:34.643823   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/multinode-053297/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0812 11:12:34.643845   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/multinode-053297/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0812 11:12:34.643899   40267 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem (1338 bytes)
	W0812 11:12:34.643926   40267 certs.go:480] ignoring /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927_empty.pem, impossibly tiny 0 bytes
	I0812 11:12:34.643935   40267 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem (1679 bytes)
	I0812 11:12:34.643955   40267 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem (1082 bytes)
	I0812 11:12:34.643978   40267 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem (1123 bytes)
	I0812 11:12:34.643998   40267 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem (1679 bytes)
	I0812 11:12:34.644033   40267 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem (1708 bytes)
	I0812 11:12:34.644059   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:12:34.644071   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem -> /usr/share/ca-certificates/10927.pem
	I0812 11:12:34.644083   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> /usr/share/ca-certificates/109272.pem
	I0812 11:12:34.644731   40267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 11:12:34.668639   40267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 11:12:34.691797   40267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 11:12:34.715657   40267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 11:12:34.740103   40267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/multinode-053297/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0812 11:12:34.763296   40267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/multinode-053297/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0812 11:12:34.786358   40267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/multinode-053297/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 11:12:34.811844   40267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/multinode-053297/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0812 11:12:34.834711   40267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 11:12:34.857318   40267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem --> /usr/share/ca-certificates/10927.pem (1338 bytes)
	I0812 11:12:34.880290   40267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /usr/share/ca-certificates/109272.pem (1708 bytes)
	I0812 11:12:34.903578   40267 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 11:12:34.919797   40267 ssh_runner.go:195] Run: openssl version
	I0812 11:12:34.925722   40267 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0812 11:12:34.925824   40267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 11:12:34.937142   40267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:12:34.941727   40267 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 12 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:12:34.941762   40267 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:12:34.941845   40267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:12:34.947460   40267 command_runner.go:130] > b5213941
	I0812 11:12:34.947538   40267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 11:12:34.956927   40267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10927.pem && ln -fs /usr/share/ca-certificates/10927.pem /etc/ssl/certs/10927.pem"
	I0812 11:12:34.967735   40267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10927.pem
	I0812 11:12:34.972966   40267 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 12 10:33 /usr/share/ca-certificates/10927.pem
	I0812 11:12:34.973010   40267 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 10:33 /usr/share/ca-certificates/10927.pem
	I0812 11:12:34.973061   40267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10927.pem
	I0812 11:12:34.978934   40267 command_runner.go:130] > 51391683
	I0812 11:12:34.979082   40267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10927.pem /etc/ssl/certs/51391683.0"
	I0812 11:12:34.989945   40267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109272.pem && ln -fs /usr/share/ca-certificates/109272.pem /etc/ssl/certs/109272.pem"
	I0812 11:12:35.001965   40267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109272.pem
	I0812 11:12:35.006644   40267 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 12 10:33 /usr/share/ca-certificates/109272.pem
	I0812 11:12:35.006702   40267 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 10:33 /usr/share/ca-certificates/109272.pem
	I0812 11:12:35.006759   40267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109272.pem
	I0812 11:12:35.012247   40267 command_runner.go:130] > 3ec20f2e
	I0812 11:12:35.012306   40267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109272.pem /etc/ssl/certs/3ec20f2e.0"
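The openssl/ln pairs above install each CA into the system trust store: hash the certificate subject, then link it into /etc/ssl/certs as <hash>.0. A minimal Go sketch of that pattern, assuming openssl is on PATH and the process can write /etc/ssl/certs (the paths here are illustrative):

// hashlink.go - sketch of the subject-hash symlinking step seen above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: hashlink /usr/share/ca-certificates/minikubeCA.pem")
		os.Exit(1)
	}
	certPath := os.Args[1]

	// Equivalent of: openssl x509 -hash -noout -in <certPath>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "hashing failed:", err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

	// Equivalent of: ln -fs <certPath> /etc/ssl/certs/<hash>.0
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ignore error if the link does not exist yet
	if err := os.Symlink(certPath, link); err != nil {
		fmt.Fprintln(os.Stderr, "symlink failed:", err)
		os.Exit(1)
	}
	fmt.Println("linked", certPath, "->", link)
}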
	I0812 11:12:35.021891   40267 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 11:12:35.026553   40267 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 11:12:35.026581   40267 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0812 11:12:35.026587   40267 command_runner.go:130] > Device: 253,1	Inode: 3150891     Links: 1
	I0812 11:12:35.026593   40267 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0812 11:12:35.026600   40267 command_runner.go:130] > Access: 2024-08-12 11:05:34.660424698 +0000
	I0812 11:12:35.026604   40267 command_runner.go:130] > Modify: 2024-08-12 11:05:34.660424698 +0000
	I0812 11:12:35.026609   40267 command_runner.go:130] > Change: 2024-08-12 11:05:34.660424698 +0000
	I0812 11:12:35.026614   40267 command_runner.go:130] >  Birth: 2024-08-12 11:05:34.660424698 +0000
	I0812 11:12:35.026672   40267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0812 11:12:35.032234   40267 command_runner.go:130] > Certificate will not expire
	I0812 11:12:35.032341   40267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0812 11:12:35.037810   40267 command_runner.go:130] > Certificate will not expire
	I0812 11:12:35.037891   40267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0812 11:12:35.043494   40267 command_runner.go:130] > Certificate will not expire
	I0812 11:12:35.043594   40267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0812 11:12:35.049113   40267 command_runner.go:130] > Certificate will not expire
	I0812 11:12:35.049198   40267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0812 11:12:35.054513   40267 command_runner.go:130] > Certificate will not expire
	I0812 11:12:35.054644   40267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0812 11:12:35.059898   40267 command_runner.go:130] > Certificate will not expire
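Each "openssl x509 ... -checkend 86400" call above asks whether the certificate expires within the next 24 hours. The same check can be done natively; a minimal sketch, with an illustrative cert path taken from the log:

// checkend.go - sketch of the "-checkend 86400" expiry check.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate is no longer valid at
// now+window, i.e. the condition `openssl x509 -checkend <seconds>` tests.
func expiresWithin(pemBytes []byte, window time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	expiring, err := expiresWithin(data, 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if expiring {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}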
	I0812 11:12:35.060063   40267 kubeadm.go:392] StartCluster: {Name:multinode-053297 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-053297 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.9 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.182 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:12:35.060168   40267 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0812 11:12:35.060225   40267 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 11:12:35.097432   40267 command_runner.go:130] > 0971024fe2a93e68dd91575b65f0053d40ec3b25ee41850f0628a96f3ee82fcc
	I0812 11:12:35.097466   40267 command_runner.go:130] > 3ed6125dc9e3a9e06ba87d5427205fc07c4c17e974db82389afdd4d8f9dcb9af
	I0812 11:12:35.097473   40267 command_runner.go:130] > a911be0f1400957d10189ab0274b18180559feb17c632377665040859f3a01ec
	I0812 11:12:35.097480   40267 command_runner.go:130] > 8f04ca85ef86602d88590a245bc263472aa6a03ddbee946668f6b1ce2bc10229
	I0812 11:12:35.097486   40267 command_runner.go:130] > 8d101e8240261ba6812982626be96b5fb5a63df6a9e1ec6133b9c493d3c8b63e
	I0812 11:12:35.097492   40267 command_runner.go:130] > 7e98b01dde217b13d66ed5c05501eace36aa404485298db337b69ff6cc4f635e
	I0812 11:12:35.097497   40267 command_runner.go:130] > 09a8e5a83ca1641d7a329a605b044bc9ec82ed50e1ce7016c7fc516380488ab9
	I0812 11:12:35.097504   40267 command_runner.go:130] > 87e5feab93ae29a05379e2f351e9c8355a4f866d237d4549c6c1992523cecef1
	I0812 11:12:35.097525   40267 cri.go:89] found id: "0971024fe2a93e68dd91575b65f0053d40ec3b25ee41850f0628a96f3ee82fcc"
	I0812 11:12:35.097535   40267 cri.go:89] found id: "3ed6125dc9e3a9e06ba87d5427205fc07c4c17e974db82389afdd4d8f9dcb9af"
	I0812 11:12:35.097540   40267 cri.go:89] found id: "a911be0f1400957d10189ab0274b18180559feb17c632377665040859f3a01ec"
	I0812 11:12:35.097545   40267 cri.go:89] found id: "8f04ca85ef86602d88590a245bc263472aa6a03ddbee946668f6b1ce2bc10229"
	I0812 11:12:35.097552   40267 cri.go:89] found id: "8d101e8240261ba6812982626be96b5fb5a63df6a9e1ec6133b9c493d3c8b63e"
	I0812 11:12:35.097556   40267 cri.go:89] found id: "7e98b01dde217b13d66ed5c05501eace36aa404485298db337b69ff6cc4f635e"
	I0812 11:12:35.097558   40267 cri.go:89] found id: "09a8e5a83ca1641d7a329a605b044bc9ec82ed50e1ce7016c7fc516380488ab9"
	I0812 11:12:35.097561   40267 cri.go:89] found id: "87e5feab93ae29a05379e2f351e9c8355a4f866d237d4549c6c1992523cecef1"
	I0812 11:12:35.097564   40267 cri.go:89] found id: ""
	I0812 11:12:35.097606   40267 ssh_runner.go:195] Run: sudo runc list -f json
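The "found id" lines above come from listing kube-system containers over the CRI: crictl is run with a namespace label filter and returns one container ID per line. A minimal sketch of that step, assuming crictl is installed and the caller may talk to the CRI socket (the log runs it via sudo):

// listids.go - sketch of collecting kube-system container IDs via crictl.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listKubeSystemContainerIDs() ([]string, error) {
	// Same command as in the log:
	//   crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainerIDs()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}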
	
	
	==> CRI-O <==
	Aug 12 11:14:21 multinode-053297 crio[2908]: time="2024-08-12 11:14:21.315494332Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723461261315459529,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=82be8ff5-526e-43ec-85cc-007b2e19f059 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:14:21 multinode-053297 crio[2908]: time="2024-08-12 11:14:21.315989237Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a3ac1838-7846-4ebc-b1b1-5f446464ff3e name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:14:21 multinode-053297 crio[2908]: time="2024-08-12 11:14:21.316042793Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a3ac1838-7846-4ebc-b1b1-5f446464ff3e name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:14:21 multinode-053297 crio[2908]: time="2024-08-12 11:14:21.316384174Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a238277bdd5844905d0abd3010b3629f0ba5122534071ada2c359554ffcfefe4,PodSandboxId:5046a74c1c71263fe0c1fc31da48ecb6ccef4a9ed236f8bfb50e599dc086fe9d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723461195176023941,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-242jl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5bb7b665-dca0-4f7d-9582-b62b8c1a5e57,},Annotations:map[string]string{io.kubernetes.container.hash: 6592dc32,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6c2e77ab819a29eb0e4f2c3452a661ad97ceed1d3e7a641e515d58b7a0bba27,PodSandboxId:72b35edc9899b10089c648b7ae810b0849349ba653534f376ca7e29b1d9be81a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723461161722363474,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-t65tb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 552ad659-4e0c-4004-8ed7-015c99592268,},Annotations:map[string]string{io.kubernetes.container.hash: 888a3696,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41532e164787b2478ba8858fe3a1d85d3395bc69728456da6edd387d3270e6aa,PodSandboxId:e1a03a69e69c192eb46b3f544870f6fa7a26d8dc7a926ef14105f1ecf7094dbe,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723461161635203952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gs2rm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4268e67b-f866-48c0-baff-19b34b4c2b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 4f033f82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a241641b58e72130b89d971b3451bc5e7ea0d5a6f6529e3370f6188b3d187129,PodSandboxId:3be9ca7cc9a867c5a7761497232d1272b39e21f9ff63bc52dfe6b467ef4ee851,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723461161531328150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87ca637d-1e99-4fbb-8b07-75b1d5100c35,},An
notations:map[string]string{io.kubernetes.container.hash: 83a5742d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de11bd5fb35f68c340162aa9fb9dfdbc5361bcd9d722e42f8d920be459f852db,PodSandboxId:a6d88ae6d013878557fb83239663ff4b4ba5cedc5114d2b8368f5a7c9f8984af,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723461161467909942,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9c48w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f528af29-5853-4435-a1f4-92d071412e75,},Annotations:map[string]string{io.ku
bernetes.container.hash: 47a53b4b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a863a59bad8c866e770b207ff1b6065b57aefa733c7a7f3eb8cb7fcc93b2d35,PodSandboxId:59ccbd6d89362db134dfe2582fb6fa5e52f301253397ef65dbec1cc81b752d85,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723461157610529668,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c29ac6eb0af15a9bcb61d9820b92a38,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bca90956398890550d64b7ae94e3ad47cacec831627c1ad0ec287a485e04a8ee,PodSandboxId:32371f054a99685b6b4524564141b68dd12ce7edb1cba51e6bd197277c5cf1dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723461157575373929,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5f92e50f16cfaa476a506d47caffa3c,},Annotations:map[string]string{io.kub
ernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f719b2750bcaaced7ca32c1946a693dd7d09ae45a6292d87eab5c88196f9a9a,PodSandboxId:279b3e1fb216cc39fd5b60d36b3f1ee844f581dc3c0cf6868adefee5c0adbcfc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723461157556699459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bdd2e9222e690c3135b4a315afb6b59,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: abe19987,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:132b15b2d16e01e3b4d236e00ddd2c5adfb7649c2d4d0faed9f1c49f75b59334,PodSandboxId:c84e5be14b24882149a8df99ca775da45b8f0adad91d2a948dd725e68524ddba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723461157547663390,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17979d34f09ad16ac279ea9b2a279470,},Annotations:map[string]string{io.kubernetes.container.hash: 5c62bb7b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1820e892790ef1cdc1a89ebfe83de1d4679004f70abedea923bed03999d209a7,PodSandboxId:a2efa8f2392f6217fbc0ae5ab9634074f7b2de51f8c404d8450e1b69480781be,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723460830843935028,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-242jl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5bb7b665-dca0-4f7d-9582-b62b8c1a5e57,},Annotations:map[string]string{io.kubernetes.container.hash: 6592dc32,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0971024fe2a93e68dd91575b65f0053d40ec3b25ee41850f0628a96f3ee82fcc,PodSandboxId:d356d9ef0c3e603d8efab73c1d6a7d4b9537a376b97bca54f461a16b20cb4002,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723460774558683381,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gs2rm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4268e67b-f866-48c0-baff-19b34b4c2b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 4f033f82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed6125dc9e3a9e06ba87d5427205fc07c4c17e974db82389afdd4d8f9dcb9af,PodSandboxId:c56d1dff8718dc20d16f903ece084aef0e16dff90b62087f3035881f9d43bac6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723460774207174002,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 87ca637d-1e99-4fbb-8b07-75b1d5100c35,},Annotations:map[string]string{io.kubernetes.container.hash: 83a5742d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a911be0f1400957d10189ab0274b18180559feb17c632377665040859f3a01ec,PodSandboxId:96d6ebf847ab7492ffb8e9255dd06e1fe9e366bd2f8f110a7c451a6b30842734,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723460762597105223,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-t65tb,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 552ad659-4e0c-4004-8ed7-015c99592268,},Annotations:map[string]string{io.kubernetes.container.hash: 888a3696,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f04ca85ef86602d88590a245bc263472aa6a03ddbee946668f6b1ce2bc10229,PodSandboxId:f401767a9adec5872e1f6075764e23ea29b9c4e729ebf70bd97da263f10e502a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723460758958979307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9c48w,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: f528af29-5853-4435-a1f4-92d071412e75,},Annotations:map[string]string{io.kubernetes.container.hash: 47a53b4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d101e8240261ba6812982626be96b5fb5a63df6a9e1ec6133b9c493d3c8b63e,PodSandboxId:7b56483787489824cc1be78de167c090000f56b4a7bc54b9ea5aced928015bab,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723460738370383886,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17979d34f09ad16ac279ea9b2a2794
70,},Annotations:map[string]string{io.kubernetes.container.hash: 5c62bb7b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e98b01dde217b13d66ed5c05501eace36aa404485298db337b69ff6cc4f635e,PodSandboxId:1fa45813f29d1a6cd5ac168bb19c426fb968217d3a14e4b97bf586eb9caaaa28,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723460738336231704,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c29ac6eb0af15a9bcb61d9820b92a38,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09a8e5a83ca1641d7a329a605b044bc9ec82ed50e1ce7016c7fc516380488ab9,PodSandboxId:258e6b42c633ca59e111fa0a2af9c553ebfcdb54b1a3ddd58983e7175774b105,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723460738307396597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bdd2e9222e690c3135b4a315afb6b59,},Annotations:map[string]st
ring{io.kubernetes.container.hash: abe19987,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e5feab93ae29a05379e2f351e9c8355a4f866d237d4549c6c1992523cecef1,PodSandboxId:a32608bd26a7fb908bc3b0f92163ca3921f050426b505c194ab170300a2ad84f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723460738267673878,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5f92e50f16cfaa476a506d47caffa3c,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a3ac1838-7846-4ebc-b1b1-5f446464ff3e name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:14:21 multinode-053297 crio[2908]: time="2024-08-12 11:14:21.357053346Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=40e32d6e-3dee-4caa-8dbc-2af3ff646511 name=/runtime.v1.RuntimeService/Version
	Aug 12 11:14:21 multinode-053297 crio[2908]: time="2024-08-12 11:14:21.357138164Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=40e32d6e-3dee-4caa-8dbc-2af3ff646511 name=/runtime.v1.RuntimeService/Version
	Aug 12 11:14:21 multinode-053297 crio[2908]: time="2024-08-12 11:14:21.358329195Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4ae48667-b1a4-49e8-8e77-bb8f8b704549 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:14:21 multinode-053297 crio[2908]: time="2024-08-12 11:14:21.358755550Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723461261358733242,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4ae48667-b1a4-49e8-8e77-bb8f8b704549 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:14:21 multinode-053297 crio[2908]: time="2024-08-12 11:14:21.359476553Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=39593391-c1a9-4ea4-abc6-acef2eb16e97 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:14:21 multinode-053297 crio[2908]: time="2024-08-12 11:14:21.359536477Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=39593391-c1a9-4ea4-abc6-acef2eb16e97 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:14:21 multinode-053297 crio[2908]: time="2024-08-12 11:14:21.359930867Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a238277bdd5844905d0abd3010b3629f0ba5122534071ada2c359554ffcfefe4,PodSandboxId:5046a74c1c71263fe0c1fc31da48ecb6ccef4a9ed236f8bfb50e599dc086fe9d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723461195176023941,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-242jl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5bb7b665-dca0-4f7d-9582-b62b8c1a5e57,},Annotations:map[string]string{io.kubernetes.container.hash: 6592dc32,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6c2e77ab819a29eb0e4f2c3452a661ad97ceed1d3e7a641e515d58b7a0bba27,PodSandboxId:72b35edc9899b10089c648b7ae810b0849349ba653534f376ca7e29b1d9be81a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723461161722363474,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-t65tb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 552ad659-4e0c-4004-8ed7-015c99592268,},Annotations:map[string]string{io.kubernetes.container.hash: 888a3696,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41532e164787b2478ba8858fe3a1d85d3395bc69728456da6edd387d3270e6aa,PodSandboxId:e1a03a69e69c192eb46b3f544870f6fa7a26d8dc7a926ef14105f1ecf7094dbe,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723461161635203952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gs2rm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4268e67b-f866-48c0-baff-19b34b4c2b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 4f033f82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a241641b58e72130b89d971b3451bc5e7ea0d5a6f6529e3370f6188b3d187129,PodSandboxId:3be9ca7cc9a867c5a7761497232d1272b39e21f9ff63bc52dfe6b467ef4ee851,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723461161531328150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87ca637d-1e99-4fbb-8b07-75b1d5100c35,},An
notations:map[string]string{io.kubernetes.container.hash: 83a5742d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de11bd5fb35f68c340162aa9fb9dfdbc5361bcd9d722e42f8d920be459f852db,PodSandboxId:a6d88ae6d013878557fb83239663ff4b4ba5cedc5114d2b8368f5a7c9f8984af,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723461161467909942,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9c48w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f528af29-5853-4435-a1f4-92d071412e75,},Annotations:map[string]string{io.ku
bernetes.container.hash: 47a53b4b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a863a59bad8c866e770b207ff1b6065b57aefa733c7a7f3eb8cb7fcc93b2d35,PodSandboxId:59ccbd6d89362db134dfe2582fb6fa5e52f301253397ef65dbec1cc81b752d85,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723461157610529668,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c29ac6eb0af15a9bcb61d9820b92a38,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bca90956398890550d64b7ae94e3ad47cacec831627c1ad0ec287a485e04a8ee,PodSandboxId:32371f054a99685b6b4524564141b68dd12ce7edb1cba51e6bd197277c5cf1dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723461157575373929,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5f92e50f16cfaa476a506d47caffa3c,},Annotations:map[string]string{io.kub
ernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f719b2750bcaaced7ca32c1946a693dd7d09ae45a6292d87eab5c88196f9a9a,PodSandboxId:279b3e1fb216cc39fd5b60d36b3f1ee844f581dc3c0cf6868adefee5c0adbcfc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723461157556699459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bdd2e9222e690c3135b4a315afb6b59,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: abe19987,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:132b15b2d16e01e3b4d236e00ddd2c5adfb7649c2d4d0faed9f1c49f75b59334,PodSandboxId:c84e5be14b24882149a8df99ca775da45b8f0adad91d2a948dd725e68524ddba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723461157547663390,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17979d34f09ad16ac279ea9b2a279470,},Annotations:map[string]string{io.kubernetes.container.hash: 5c62bb7b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1820e892790ef1cdc1a89ebfe83de1d4679004f70abedea923bed03999d209a7,PodSandboxId:a2efa8f2392f6217fbc0ae5ab9634074f7b2de51f8c404d8450e1b69480781be,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723460830843935028,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-242jl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5bb7b665-dca0-4f7d-9582-b62b8c1a5e57,},Annotations:map[string]string{io.kubernetes.container.hash: 6592dc32,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0971024fe2a93e68dd91575b65f0053d40ec3b25ee41850f0628a96f3ee82fcc,PodSandboxId:d356d9ef0c3e603d8efab73c1d6a7d4b9537a376b97bca54f461a16b20cb4002,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723460774558683381,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gs2rm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4268e67b-f866-48c0-baff-19b34b4c2b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 4f033f82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed6125dc9e3a9e06ba87d5427205fc07c4c17e974db82389afdd4d8f9dcb9af,PodSandboxId:c56d1dff8718dc20d16f903ece084aef0e16dff90b62087f3035881f9d43bac6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723460774207174002,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 87ca637d-1e99-4fbb-8b07-75b1d5100c35,},Annotations:map[string]string{io.kubernetes.container.hash: 83a5742d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a911be0f1400957d10189ab0274b18180559feb17c632377665040859f3a01ec,PodSandboxId:96d6ebf847ab7492ffb8e9255dd06e1fe9e366bd2f8f110a7c451a6b30842734,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723460762597105223,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-t65tb,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 552ad659-4e0c-4004-8ed7-015c99592268,},Annotations:map[string]string{io.kubernetes.container.hash: 888a3696,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f04ca85ef86602d88590a245bc263472aa6a03ddbee946668f6b1ce2bc10229,PodSandboxId:f401767a9adec5872e1f6075764e23ea29b9c4e729ebf70bd97da263f10e502a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723460758958979307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9c48w,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: f528af29-5853-4435-a1f4-92d071412e75,},Annotations:map[string]string{io.kubernetes.container.hash: 47a53b4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d101e8240261ba6812982626be96b5fb5a63df6a9e1ec6133b9c493d3c8b63e,PodSandboxId:7b56483787489824cc1be78de167c090000f56b4a7bc54b9ea5aced928015bab,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723460738370383886,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17979d34f09ad16ac279ea9b2a2794
70,},Annotations:map[string]string{io.kubernetes.container.hash: 5c62bb7b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e98b01dde217b13d66ed5c05501eace36aa404485298db337b69ff6cc4f635e,PodSandboxId:1fa45813f29d1a6cd5ac168bb19c426fb968217d3a14e4b97bf586eb9caaaa28,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723460738336231704,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c29ac6eb0af15a9bcb61d9820b92a38,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09a8e5a83ca1641d7a329a605b044bc9ec82ed50e1ce7016c7fc516380488ab9,PodSandboxId:258e6b42c633ca59e111fa0a2af9c553ebfcdb54b1a3ddd58983e7175774b105,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723460738307396597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bdd2e9222e690c3135b4a315afb6b59,},Annotations:map[string]st
ring{io.kubernetes.container.hash: abe19987,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e5feab93ae29a05379e2f351e9c8355a4f866d237d4549c6c1992523cecef1,PodSandboxId:a32608bd26a7fb908bc3b0f92163ca3921f050426b505c194ab170300a2ad84f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723460738267673878,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5f92e50f16cfaa476a506d47caffa3c,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=39593391-c1a9-4ea4-abc6-acef2eb16e97 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:14:21 multinode-053297 crio[2908]: time="2024-08-12 11:14:21.409447406Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b2731622-22a1-4aaa-9c37-429e2680b3fd name=/runtime.v1.RuntimeService/Version
	Aug 12 11:14:21 multinode-053297 crio[2908]: time="2024-08-12 11:14:21.409531595Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b2731622-22a1-4aaa-9c37-429e2680b3fd name=/runtime.v1.RuntimeService/Version
	Aug 12 11:14:21 multinode-053297 crio[2908]: time="2024-08-12 11:14:21.411651144Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e753b753-ad9c-4fa8-abe3-353d1e0a3222 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:14:21 multinode-053297 crio[2908]: time="2024-08-12 11:14:21.412435542Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723461261412396825,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e753b753-ad9c-4fa8-abe3-353d1e0a3222 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:14:21 multinode-053297 crio[2908]: time="2024-08-12 11:14:21.413072533Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e1be31c-2db4-4fa5-b2a5-77ec5d047375 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:14:21 multinode-053297 crio[2908]: time="2024-08-12 11:14:21.413133644Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e1be31c-2db4-4fa5-b2a5-77ec5d047375 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:14:21 multinode-053297 crio[2908]: time="2024-08-12 11:14:21.415629314Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a238277bdd5844905d0abd3010b3629f0ba5122534071ada2c359554ffcfefe4,PodSandboxId:5046a74c1c71263fe0c1fc31da48ecb6ccef4a9ed236f8bfb50e599dc086fe9d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723461195176023941,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-242jl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5bb7b665-dca0-4f7d-9582-b62b8c1a5e57,},Annotations:map[string]string{io.kubernetes.container.hash: 6592dc32,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6c2e77ab819a29eb0e4f2c3452a661ad97ceed1d3e7a641e515d58b7a0bba27,PodSandboxId:72b35edc9899b10089c648b7ae810b0849349ba653534f376ca7e29b1d9be81a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723461161722363474,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-t65tb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 552ad659-4e0c-4004-8ed7-015c99592268,},Annotations:map[string]string{io.kubernetes.container.hash: 888a3696,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41532e164787b2478ba8858fe3a1d85d3395bc69728456da6edd387d3270e6aa,PodSandboxId:e1a03a69e69c192eb46b3f544870f6fa7a26d8dc7a926ef14105f1ecf7094dbe,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723461161635203952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gs2rm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4268e67b-f866-48c0-baff-19b34b4c2b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 4f033f82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a241641b58e72130b89d971b3451bc5e7ea0d5a6f6529e3370f6188b3d187129,PodSandboxId:3be9ca7cc9a867c5a7761497232d1272b39e21f9ff63bc52dfe6b467ef4ee851,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723461161531328150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87ca637d-1e99-4fbb-8b07-75b1d5100c35,},An
notations:map[string]string{io.kubernetes.container.hash: 83a5742d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de11bd5fb35f68c340162aa9fb9dfdbc5361bcd9d722e42f8d920be459f852db,PodSandboxId:a6d88ae6d013878557fb83239663ff4b4ba5cedc5114d2b8368f5a7c9f8984af,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723461161467909942,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9c48w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f528af29-5853-4435-a1f4-92d071412e75,},Annotations:map[string]string{io.ku
bernetes.container.hash: 47a53b4b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a863a59bad8c866e770b207ff1b6065b57aefa733c7a7f3eb8cb7fcc93b2d35,PodSandboxId:59ccbd6d89362db134dfe2582fb6fa5e52f301253397ef65dbec1cc81b752d85,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723461157610529668,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c29ac6eb0af15a9bcb61d9820b92a38,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bca90956398890550d64b7ae94e3ad47cacec831627c1ad0ec287a485e04a8ee,PodSandboxId:32371f054a99685b6b4524564141b68dd12ce7edb1cba51e6bd197277c5cf1dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723461157575373929,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5f92e50f16cfaa476a506d47caffa3c,},Annotations:map[string]string{io.kub
ernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f719b2750bcaaced7ca32c1946a693dd7d09ae45a6292d87eab5c88196f9a9a,PodSandboxId:279b3e1fb216cc39fd5b60d36b3f1ee844f581dc3c0cf6868adefee5c0adbcfc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723461157556699459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bdd2e9222e690c3135b4a315afb6b59,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: abe19987,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:132b15b2d16e01e3b4d236e00ddd2c5adfb7649c2d4d0faed9f1c49f75b59334,PodSandboxId:c84e5be14b24882149a8df99ca775da45b8f0adad91d2a948dd725e68524ddba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723461157547663390,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17979d34f09ad16ac279ea9b2a279470,},Annotations:map[string]string{io.kubernetes.container.hash: 5c62bb7b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1820e892790ef1cdc1a89ebfe83de1d4679004f70abedea923bed03999d209a7,PodSandboxId:a2efa8f2392f6217fbc0ae5ab9634074f7b2de51f8c404d8450e1b69480781be,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723460830843935028,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-242jl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5bb7b665-dca0-4f7d-9582-b62b8c1a5e57,},Annotations:map[string]string{io.kubernetes.container.hash: 6592dc32,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0971024fe2a93e68dd91575b65f0053d40ec3b25ee41850f0628a96f3ee82fcc,PodSandboxId:d356d9ef0c3e603d8efab73c1d6a7d4b9537a376b97bca54f461a16b20cb4002,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723460774558683381,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gs2rm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4268e67b-f866-48c0-baff-19b34b4c2b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 4f033f82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed6125dc9e3a9e06ba87d5427205fc07c4c17e974db82389afdd4d8f9dcb9af,PodSandboxId:c56d1dff8718dc20d16f903ece084aef0e16dff90b62087f3035881f9d43bac6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723460774207174002,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 87ca637d-1e99-4fbb-8b07-75b1d5100c35,},Annotations:map[string]string{io.kubernetes.container.hash: 83a5742d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a911be0f1400957d10189ab0274b18180559feb17c632377665040859f3a01ec,PodSandboxId:96d6ebf847ab7492ffb8e9255dd06e1fe9e366bd2f8f110a7c451a6b30842734,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723460762597105223,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-t65tb,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 552ad659-4e0c-4004-8ed7-015c99592268,},Annotations:map[string]string{io.kubernetes.container.hash: 888a3696,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f04ca85ef86602d88590a245bc263472aa6a03ddbee946668f6b1ce2bc10229,PodSandboxId:f401767a9adec5872e1f6075764e23ea29b9c4e729ebf70bd97da263f10e502a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723460758958979307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9c48w,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: f528af29-5853-4435-a1f4-92d071412e75,},Annotations:map[string]string{io.kubernetes.container.hash: 47a53b4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d101e8240261ba6812982626be96b5fb5a63df6a9e1ec6133b9c493d3c8b63e,PodSandboxId:7b56483787489824cc1be78de167c090000f56b4a7bc54b9ea5aced928015bab,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723460738370383886,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17979d34f09ad16ac279ea9b2a2794
70,},Annotations:map[string]string{io.kubernetes.container.hash: 5c62bb7b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e98b01dde217b13d66ed5c05501eace36aa404485298db337b69ff6cc4f635e,PodSandboxId:1fa45813f29d1a6cd5ac168bb19c426fb968217d3a14e4b97bf586eb9caaaa28,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723460738336231704,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c29ac6eb0af15a9bcb61d9820b92a38,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09a8e5a83ca1641d7a329a605b044bc9ec82ed50e1ce7016c7fc516380488ab9,PodSandboxId:258e6b42c633ca59e111fa0a2af9c553ebfcdb54b1a3ddd58983e7175774b105,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723460738307396597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bdd2e9222e690c3135b4a315afb6b59,},Annotations:map[string]st
ring{io.kubernetes.container.hash: abe19987,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e5feab93ae29a05379e2f351e9c8355a4f866d237d4549c6c1992523cecef1,PodSandboxId:a32608bd26a7fb908bc3b0f92163ca3921f050426b505c194ab170300a2ad84f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723460738267673878,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5f92e50f16cfaa476a506d47caffa3c,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0e1be31c-2db4-4fa5-b2a5-77ec5d047375 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:14:21 multinode-053297 crio[2908]: time="2024-08-12 11:14:21.463230866Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0fec9397-5afe-4119-83c7-8d9a61d26803 name=/runtime.v1.RuntimeService/Version
	Aug 12 11:14:21 multinode-053297 crio[2908]: time="2024-08-12 11:14:21.463307198Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0fec9397-5afe-4119-83c7-8d9a61d26803 name=/runtime.v1.RuntimeService/Version
	Aug 12 11:14:21 multinode-053297 crio[2908]: time="2024-08-12 11:14:21.464785158Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d9d7efa9-8bd6-4267-802e-65d7c27512b8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:14:21 multinode-053297 crio[2908]: time="2024-08-12 11:14:21.465290943Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723461261465265092,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d9d7efa9-8bd6-4267-802e-65d7c27512b8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:14:21 multinode-053297 crio[2908]: time="2024-08-12 11:14:21.465902199Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb00afe1-4fd7-42de-8010-b9f788bb9868 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:14:21 multinode-053297 crio[2908]: time="2024-08-12 11:14:21.465962916Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb00afe1-4fd7-42de-8010-b9f788bb9868 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:14:21 multinode-053297 crio[2908]: time="2024-08-12 11:14:21.466299310Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a238277bdd5844905d0abd3010b3629f0ba5122534071ada2c359554ffcfefe4,PodSandboxId:5046a74c1c71263fe0c1fc31da48ecb6ccef4a9ed236f8bfb50e599dc086fe9d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723461195176023941,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-242jl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5bb7b665-dca0-4f7d-9582-b62b8c1a5e57,},Annotations:map[string]string{io.kubernetes.container.hash: 6592dc32,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6c2e77ab819a29eb0e4f2c3452a661ad97ceed1d3e7a641e515d58b7a0bba27,PodSandboxId:72b35edc9899b10089c648b7ae810b0849349ba653534f376ca7e29b1d9be81a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723461161722363474,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-t65tb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 552ad659-4e0c-4004-8ed7-015c99592268,},Annotations:map[string]string{io.kubernetes.container.hash: 888a3696,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41532e164787b2478ba8858fe3a1d85d3395bc69728456da6edd387d3270e6aa,PodSandboxId:e1a03a69e69c192eb46b3f544870f6fa7a26d8dc7a926ef14105f1ecf7094dbe,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723461161635203952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gs2rm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4268e67b-f866-48c0-baff-19b34b4c2b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 4f033f82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a241641b58e72130b89d971b3451bc5e7ea0d5a6f6529e3370f6188b3d187129,PodSandboxId:3be9ca7cc9a867c5a7761497232d1272b39e21f9ff63bc52dfe6b467ef4ee851,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723461161531328150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87ca637d-1e99-4fbb-8b07-75b1d5100c35,},An
notations:map[string]string{io.kubernetes.container.hash: 83a5742d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de11bd5fb35f68c340162aa9fb9dfdbc5361bcd9d722e42f8d920be459f852db,PodSandboxId:a6d88ae6d013878557fb83239663ff4b4ba5cedc5114d2b8368f5a7c9f8984af,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723461161467909942,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9c48w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f528af29-5853-4435-a1f4-92d071412e75,},Annotations:map[string]string{io.ku
bernetes.container.hash: 47a53b4b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a863a59bad8c866e770b207ff1b6065b57aefa733c7a7f3eb8cb7fcc93b2d35,PodSandboxId:59ccbd6d89362db134dfe2582fb6fa5e52f301253397ef65dbec1cc81b752d85,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723461157610529668,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c29ac6eb0af15a9bcb61d9820b92a38,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bca90956398890550d64b7ae94e3ad47cacec831627c1ad0ec287a485e04a8ee,PodSandboxId:32371f054a99685b6b4524564141b68dd12ce7edb1cba51e6bd197277c5cf1dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723461157575373929,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5f92e50f16cfaa476a506d47caffa3c,},Annotations:map[string]string{io.kub
ernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f719b2750bcaaced7ca32c1946a693dd7d09ae45a6292d87eab5c88196f9a9a,PodSandboxId:279b3e1fb216cc39fd5b60d36b3f1ee844f581dc3c0cf6868adefee5c0adbcfc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723461157556699459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bdd2e9222e690c3135b4a315afb6b59,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: abe19987,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:132b15b2d16e01e3b4d236e00ddd2c5adfb7649c2d4d0faed9f1c49f75b59334,PodSandboxId:c84e5be14b24882149a8df99ca775da45b8f0adad91d2a948dd725e68524ddba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723461157547663390,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17979d34f09ad16ac279ea9b2a279470,},Annotations:map[string]string{io.kubernetes.container.hash: 5c62bb7b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1820e892790ef1cdc1a89ebfe83de1d4679004f70abedea923bed03999d209a7,PodSandboxId:a2efa8f2392f6217fbc0ae5ab9634074f7b2de51f8c404d8450e1b69480781be,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723460830843935028,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-242jl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5bb7b665-dca0-4f7d-9582-b62b8c1a5e57,},Annotations:map[string]string{io.kubernetes.container.hash: 6592dc32,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0971024fe2a93e68dd91575b65f0053d40ec3b25ee41850f0628a96f3ee82fcc,PodSandboxId:d356d9ef0c3e603d8efab73c1d6a7d4b9537a376b97bca54f461a16b20cb4002,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723460774558683381,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gs2rm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4268e67b-f866-48c0-baff-19b34b4c2b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 4f033f82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed6125dc9e3a9e06ba87d5427205fc07c4c17e974db82389afdd4d8f9dcb9af,PodSandboxId:c56d1dff8718dc20d16f903ece084aef0e16dff90b62087f3035881f9d43bac6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723460774207174002,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 87ca637d-1e99-4fbb-8b07-75b1d5100c35,},Annotations:map[string]string{io.kubernetes.container.hash: 83a5742d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a911be0f1400957d10189ab0274b18180559feb17c632377665040859f3a01ec,PodSandboxId:96d6ebf847ab7492ffb8e9255dd06e1fe9e366bd2f8f110a7c451a6b30842734,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723460762597105223,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-t65tb,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 552ad659-4e0c-4004-8ed7-015c99592268,},Annotations:map[string]string{io.kubernetes.container.hash: 888a3696,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f04ca85ef86602d88590a245bc263472aa6a03ddbee946668f6b1ce2bc10229,PodSandboxId:f401767a9adec5872e1f6075764e23ea29b9c4e729ebf70bd97da263f10e502a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723460758958979307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9c48w,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: f528af29-5853-4435-a1f4-92d071412e75,},Annotations:map[string]string{io.kubernetes.container.hash: 47a53b4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d101e8240261ba6812982626be96b5fb5a63df6a9e1ec6133b9c493d3c8b63e,PodSandboxId:7b56483787489824cc1be78de167c090000f56b4a7bc54b9ea5aced928015bab,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723460738370383886,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17979d34f09ad16ac279ea9b2a2794
70,},Annotations:map[string]string{io.kubernetes.container.hash: 5c62bb7b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e98b01dde217b13d66ed5c05501eace36aa404485298db337b69ff6cc4f635e,PodSandboxId:1fa45813f29d1a6cd5ac168bb19c426fb968217d3a14e4b97bf586eb9caaaa28,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723460738336231704,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c29ac6eb0af15a9bcb61d9820b92a38,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09a8e5a83ca1641d7a329a605b044bc9ec82ed50e1ce7016c7fc516380488ab9,PodSandboxId:258e6b42c633ca59e111fa0a2af9c553ebfcdb54b1a3ddd58983e7175774b105,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723460738307396597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bdd2e9222e690c3135b4a315afb6b59,},Annotations:map[string]st
ring{io.kubernetes.container.hash: abe19987,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e5feab93ae29a05379e2f351e9c8355a4f866d237d4549c6c1992523cecef1,PodSandboxId:a32608bd26a7fb908bc3b0f92163ca3921f050426b505c194ab170300a2ad84f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723460738267673878,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5f92e50f16cfaa476a506d47caffa3c,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fb00afe1-4fd7-42de-8010-b9f788bb9868 name=/runtime.v1.RuntimeService/ListContainers
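
	The crio journal excerpt above records the CRI round-trips (Version, ImageFsInfo, ListContainers) made while diagnostics were gathered. As a rough sketch of how the same view could be reproduced manually, assuming the multinode-053297 profile from this log is still running, the unit log can be read over minikube ssh:

	  out/minikube-linux-amd64 -p multinode-053297 ssh "sudo journalctl -u crio -n 200 --no-pager"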
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a238277bdd584       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   5046a74c1c712       busybox-fc5497c4f-242jl
	e6c2e77ab819a       917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557                                      About a minute ago   Running             kindnet-cni               1                   72b35edc9899b       kindnet-t65tb
	41532e164787b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   e1a03a69e69c1       coredns-7db6d8ff4d-gs2rm
	a241641b58e72       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   3be9ca7cc9a86       storage-provisioner
	de11bd5fb35f6       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      About a minute ago   Running             kube-proxy                1                   a6d88ae6d0138       kube-proxy-9c48w
	1a863a59bad8c       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Running             kube-scheduler            1                   59ccbd6d89362       kube-scheduler-multinode-053297
	bca9095639889       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   1                   32371f054a996       kube-controller-manager-multinode-053297
	5f719b2750bca       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            1                   279b3e1fb216c       kube-apiserver-multinode-053297
	132b15b2d16e0       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   c84e5be14b248       etcd-multinode-053297
	1820e892790ef       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   a2efa8f2392f6       busybox-fc5497c4f-242jl
	0971024fe2a93       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago        Exited              coredns                   0                   d356d9ef0c3e6       coredns-7db6d8ff4d-gs2rm
	3ed6125dc9e3a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   c56d1dff8718d       storage-provisioner
	a911be0f14009       docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3    8 minutes ago        Exited              kindnet-cni               0                   96d6ebf847ab7       kindnet-t65tb
	8f04ca85ef866       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago        Exited              kube-proxy                0                   f401767a9adec       kube-proxy-9c48w
	8d101e8240261       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   7b56483787489       etcd-multinode-053297
	7e98b01dde217       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago        Exited              kube-scheduler            0                   1fa45813f29d1       kube-scheduler-multinode-053297
	09a8e5a83ca16       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago        Exited              kube-apiserver            0                   258e6b42c633c       kube-apiserver-multinode-053297
	87e5feab93ae2       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Exited              kube-controller-manager   0                   a32608bd26a7f       kube-controller-manager-multinode-053297
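
	The container-status table above is CRI-level state from the control-plane node. An equivalent listing (including the Exited attempt-0 containers) could be pulled manually with crictl over minikube ssh; this is a sketch under the assumption that the profile still exists:

	  out/minikube-linux-amd64 -p multinode-053297 ssh "sudo crictl ps -a"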
	
	
	==> coredns [0971024fe2a93e68dd91575b65f0053d40ec3b25ee41850f0628a96f3ee82fcc] <==
	[INFO] 10.244.1.2:56809 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002139015s
	[INFO] 10.244.1.2:41138 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000152623s
	[INFO] 10.244.1.2:56293 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000075426s
	[INFO] 10.244.1.2:35681 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001539289s
	[INFO] 10.244.1.2:49715 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000062349s
	[INFO] 10.244.1.2:57037 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077923s
	[INFO] 10.244.1.2:56569 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063543s
	[INFO] 10.244.0.3:45509 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000078679s
	[INFO] 10.244.0.3:47636 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000038773s
	[INFO] 10.244.0.3:36470 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000034693s
	[INFO] 10.244.0.3:51400 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000041683s
	[INFO] 10.244.1.2:38741 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115896s
	[INFO] 10.244.1.2:47897 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105871s
	[INFO] 10.244.1.2:34308 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088503s
	[INFO] 10.244.1.2:36210 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00006932s
	[INFO] 10.244.0.3:39563 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000087584s
	[INFO] 10.244.0.3:33056 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000065255s
	[INFO] 10.244.0.3:57813 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000051078s
	[INFO] 10.244.0.3:40260 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000070334s
	[INFO] 10.244.1.2:39761 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141754s
	[INFO] 10.244.1.2:34700 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000077178s
	[INFO] 10.244.1.2:44691 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00007049s
	[INFO] 10.244.1.2:50622 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109723s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [41532e164787b2478ba8858fe3a1d85d3395bc69728456da6edd387d3270e6aa] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:32805 - 60500 "HINFO IN 1183547355277371863.2435189660485626675. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014960031s
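
	The two coredns blocks correspond to the restarted container (restartCount 1 in the listings above): the first is the exited attempt 0, the second the currently running attempt 1. If needed, both could be re-fetched with kubectl; this is a sketch that assumes the kubectl context carries the profile name, as minikube sets up by default:

	  kubectl --context multinode-053297 -n kube-system logs coredns-7db6d8ff4d-gs2rm
	  kubectl --context multinode-053297 -n kube-system logs coredns-7db6d8ff4d-gs2rm --previous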
	
	
	==> describe nodes <==
	Name:               multinode-053297
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-053297
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
	                    minikube.k8s.io/name=multinode-053297
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_12T11_05_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 11:05:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-053297
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 11:14:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 11:12:40 +0000   Mon, 12 Aug 2024 11:05:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 11:12:40 +0000   Mon, 12 Aug 2024 11:05:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 11:12:40 +0000   Mon, 12 Aug 2024 11:05:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 11:12:40 +0000   Mon, 12 Aug 2024 11:06:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.95
	  Hostname:    multinode-053297
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9402e00ee03348edb40ff9f911ec78c9
	  System UUID:                9402e00e-e033-48ed-b40f-f9f911ec78c9
	  Boot ID:                    1e24d6d4-b18a-4791-90d4-b9c5725f429c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-242jl                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 coredns-7db6d8ff4d-gs2rm                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m23s
	  kube-system                 etcd-multinode-053297                         100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m39s
	  kube-system                 kindnet-t65tb                                 100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m23s
	  kube-system                 kube-apiserver-multinode-053297               250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 kube-controller-manager-multinode-053297      200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 kube-proxy-9c48w                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-scheduler-multinode-053297               100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m22s                  kube-proxy       
	  Normal  Starting                 99s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  8m44s (x8 over 8m44s)  kubelet          Node multinode-053297 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m44s (x8 over 8m44s)  kubelet          Node multinode-053297 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m44s (x7 over 8m44s)  kubelet          Node multinode-053297 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m38s                  kubelet          Node multinode-053297 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m38s                  kubelet          Node multinode-053297 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m38s                  kubelet          Node multinode-053297 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m38s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m24s                  node-controller  Node multinode-053297 event: Registered Node multinode-053297 in Controller
	  Normal  NodeReady                8m8s                   kubelet          Node multinode-053297 status is now: NodeReady
	  Normal  Starting                 105s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  105s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  104s (x8 over 105s)    kubelet          Node multinode-053297 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s (x8 over 105s)    kubelet          Node multinode-053297 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s (x7 over 105s)    kubelet          Node multinode-053297 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           89s                    node-controller  Node multinode-053297 event: Registered Node multinode-053297 in Controller
	
	
	Name:               multinode-053297-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-053297-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
	                    minikube.k8s.io/name=multinode-053297
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_12T11_13_20_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 11:13:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-053297-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 11:14:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 11:13:50 +0000   Mon, 12 Aug 2024 11:13:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 11:13:50 +0000   Mon, 12 Aug 2024 11:13:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 11:13:50 +0000   Mon, 12 Aug 2024 11:13:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 11:13:50 +0000   Mon, 12 Aug 2024 11:13:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.9
	  Hostname:    multinode-053297-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3c7437527ba44e21af49c437482262f8
	  System UUID:                3c743752-7ba4-4e21-af49-c437482262f8
	  Boot ID:                    0f070c0c-689f-42cd-a17b-70c8ff293cd1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hrnrt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kindnet-glm6n              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m36s
	  kube-system                 kube-proxy-wmdlz           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m31s                  kube-proxy  
	  Normal  Starting                 57s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m37s (x2 over 7m37s)  kubelet     Node multinode-053297-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m37s (x2 over 7m37s)  kubelet     Node multinode-053297-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m37s (x2 over 7m37s)  kubelet     Node multinode-053297-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m36s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m16s                  kubelet     Node multinode-053297-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  62s (x2 over 62s)      kubelet     Node multinode-053297-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x2 over 62s)      kubelet     Node multinode-053297-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x2 over 62s)      kubelet     Node multinode-053297-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  62s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                42s                    kubelet     Node multinode-053297-m02 status is now: NodeReady
	
	
	Name:               multinode-053297-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-053297-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
	                    minikube.k8s.io/name=multinode-053297
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_12T11_13_59_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 11:13:58 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-053297-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 11:14:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 11:14:18 +0000   Mon, 12 Aug 2024 11:13:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 11:14:18 +0000   Mon, 12 Aug 2024 11:13:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 11:14:18 +0000   Mon, 12 Aug 2024 11:13:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 11:14:18 +0000   Mon, 12 Aug 2024 11:14:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.182
	  Hostname:    multinode-053297-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6442b4bfdbf8495b92aeceaa220f3615
	  System UUID:                6442b4bf-dbf8-495b-92ae-ceaa220f3615
	  Boot ID:                    32bd5701-05ca-4acb-8d80-8e44e2f7b865
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6nwk2       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m42s
	  kube-system                 kube-proxy-d2j9k    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m48s                  kube-proxy       
	  Normal  Starting                 6m37s                  kube-proxy       
	  Normal  Starting                 18s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  6m43s (x2 over 6m43s)  kubelet          Node multinode-053297-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m43s (x2 over 6m43s)  kubelet          Node multinode-053297-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m43s (x2 over 6m43s)  kubelet          Node multinode-053297-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m22s                  kubelet          Node multinode-053297-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m52s (x2 over 5m52s)  kubelet          Node multinode-053297-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m52s (x2 over 5m52s)  kubelet          Node multinode-053297-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m52s (x2 over 5m52s)  kubelet          Node multinode-053297-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m33s                  kubelet          Node multinode-053297-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  23s (x2 over 23s)      kubelet          Node multinode-053297-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x2 over 23s)      kubelet          Node multinode-053297-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x2 over 23s)      kubelet          Node multinode-053297-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19s                    node-controller  Node multinode-053297-m03 event: Registered Node multinode-053297-m03 in Controller
	  Normal  NodeReady                3s                     kubelet          Node multinode-053297-m03 status is now: NodeReady
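
The node details above are captured kubectl describe output. For quick triage of the same data, a one-liner along these lines (a sketch only, assuming the usual minikube-generated kubectl context name for this profile) prints just each node's Ready condition and any taints:

  kubectl --context multinode-053297 get nodes \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\t"}{.spec.taints[*].key}{"\n"}{end}'

Against the state shown above this would report all three nodes Ready, with only multinode-053297-m03 still carrying the node.kubernetes.io/not-ready:NoExecute taint.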
	
	
	==> dmesg <==
	[  +0.071181] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.200803] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.112004] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.274079] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +4.118414] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +4.020345] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +0.064595] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.994625] systemd-fstab-generator[1288]: Ignoring "noauto" option for root device
	[  +0.070271] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.329425] kauditd_printk_skb: 18 callbacks suppressed
	[  +9.354695] systemd-fstab-generator[1554]: Ignoring "noauto" option for root device
	[Aug12 11:06] kauditd_printk_skb: 60 callbacks suppressed
	[Aug12 11:07] kauditd_printk_skb: 14 callbacks suppressed
	[Aug12 11:12] systemd-fstab-generator[2828]: Ignoring "noauto" option for root device
	[  +0.156041] systemd-fstab-generator[2840]: Ignoring "noauto" option for root device
	[  +0.170886] systemd-fstab-generator[2854]: Ignoring "noauto" option for root device
	[  +0.153664] systemd-fstab-generator[2866]: Ignoring "noauto" option for root device
	[  +0.282629] systemd-fstab-generator[2894]: Ignoring "noauto" option for root device
	[  +8.567549] systemd-fstab-generator[2991]: Ignoring "noauto" option for root device
	[  +0.094114] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.071047] systemd-fstab-generator[3114]: Ignoring "noauto" option for root device
	[  +4.710370] kauditd_printk_skb: 74 callbacks suppressed
	[ +11.500750] kauditd_printk_skb: 32 callbacks suppressed
	[  +2.401140] systemd-fstab-generator[3946]: Ignoring "noauto" option for root device
	[Aug12 11:13] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [132b15b2d16e01e3b4d236e00ddd2c5adfb7649c2d4d0faed9f1c49f75b59334] <==
	{"level":"info","ts":"2024-08-12T11:12:37.888353Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-12T11:12:37.888462Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-12T11:12:37.900324Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 switched to configuration voters=(47039837626653079)"}
	{"level":"info","ts":"2024-08-12T11:12:37.900463Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"986e33f48d4d13ba","local-member-id":"a71e7bac075997","added-peer-id":"a71e7bac075997","added-peer-peer-urls":["https://192.168.39.95:2380"]}
	{"level":"info","ts":"2024-08-12T11:12:37.900616Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"986e33f48d4d13ba","local-member-id":"a71e7bac075997","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T11:12:37.90066Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T11:12:37.913241Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-12T11:12:37.913362Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.95:2380"}
	{"level":"info","ts":"2024-08-12T11:12:37.913491Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.95:2380"}
	{"level":"info","ts":"2024-08-12T11:12:37.914298Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"a71e7bac075997","initial-advertise-peer-urls":["https://192.168.39.95:2380"],"listen-peer-urls":["https://192.168.39.95:2380"],"advertise-client-urls":["https://192.168.39.95:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.95:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-12T11:12:37.914777Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-12T11:12:38.913871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-12T11:12:38.913927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-12T11:12:38.913962Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 received MsgPreVoteResp from a71e7bac075997 at term 2"}
	{"level":"info","ts":"2024-08-12T11:12:38.913995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 became candidate at term 3"}
	{"level":"info","ts":"2024-08-12T11:12:38.914022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 received MsgVoteResp from a71e7bac075997 at term 3"}
	{"level":"info","ts":"2024-08-12T11:12:38.914041Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 became leader at term 3"}
	{"level":"info","ts":"2024-08-12T11:12:38.914062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a71e7bac075997 elected leader a71e7bac075997 at term 3"}
	{"level":"info","ts":"2024-08-12T11:12:38.916667Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"a71e7bac075997","local-member-attributes":"{Name:multinode-053297 ClientURLs:[https://192.168.39.95:2379]}","request-path":"/0/members/a71e7bac075997/attributes","cluster-id":"986e33f48d4d13ba","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-12T11:12:38.91672Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-12T11:12:38.917244Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-12T11:12:38.919562Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.95:2379"}
	{"level":"info","ts":"2024-08-12T11:12:38.922696Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-12T11:12:38.932884Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-12T11:12:38.932933Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [8d101e8240261ba6812982626be96b5fb5a63df6a9e1ec6133b9c493d3c8b63e] <==
	{"level":"info","ts":"2024-08-12T11:06:45.189717Z","caller":"traceutil/trace.go:171","msg":"trace[438084952] transaction","detail":"{read_only:false; response_revision:498; number_of_response:1; }","duration":"185.043578ms","start":"2024-08-12T11:06:45.004653Z","end":"2024-08-12T11:06:45.189696Z","steps":["trace[438084952] 'process raft request'  (duration: 184.964897ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T11:06:45.190026Z","caller":"traceutil/trace.go:171","msg":"trace[378657706] linearizableReadLoop","detail":"{readStateIndex:522; appliedIndex:521; }","duration":"227.712542ms","start":"2024-08-12T11:06:44.962305Z","end":"2024-08-12T11:06:45.190018Z","steps":["trace[378657706] 'read index received'  (duration: 64.207681ms)","trace[378657706] 'applied index is now lower than readState.Index'  (duration: 163.50418ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-12T11:06:45.190212Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.892087ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-12T11:06:45.192138Z","caller":"traceutil/trace.go:171","msg":"trace[1306728832] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:498; }","duration":"229.836248ms","start":"2024-08-12T11:06:44.962281Z","end":"2024-08-12T11:06:45.192117Z","steps":["trace[1306728832] 'agreement among raft nodes before linearized reading'  (duration: 227.821754ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T11:07:39.099264Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.270024ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6455788321831307450 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-053297-m03.17eaf68513057458\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-053297-m03.17eaf68513057458\" value_size:642 lease:6455788321831307001 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-08-12T11:07:39.099559Z","caller":"traceutil/trace.go:171","msg":"trace[483768616] linearizableReadLoop","detail":"{readStateIndex:677; appliedIndex:675; }","duration":"135.205795ms","start":"2024-08-12T11:07:38.964326Z","end":"2024-08-12T11:07:39.099532Z","steps":["trace[483768616] 'read index received'  (duration: 133.17938ms)","trace[483768616] 'applied index is now lower than readState.Index'  (duration: 2.025783ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-12T11:07:39.099656Z","caller":"traceutil/trace.go:171","msg":"trace[1152437288] transaction","detail":"{read_only:false; response_revision:635; number_of_response:1; }","duration":"177.9057ms","start":"2024-08-12T11:07:38.921743Z","end":"2024-08-12T11:07:39.099649Z","steps":["trace[1152437288] 'process raft request'  (duration: 177.736824ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T11:07:39.099687Z","caller":"traceutil/trace.go:171","msg":"trace[1054135088] transaction","detail":"{read_only:false; response_revision:634; number_of_response:1; }","duration":"245.754476ms","start":"2024-08-12T11:07:38.853917Z","end":"2024-08-12T11:07:39.099671Z","steps":["trace[1054135088] 'process raft request'  (duration: 58.584548ms)","trace[1054135088] 'compare'  (duration: 186.187131ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-12T11:07:39.09997Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.653176ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-12T11:07:39.100029Z","caller":"traceutil/trace.go:171","msg":"trace[2070505261] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:635; }","duration":"135.733726ms","start":"2024-08-12T11:07:38.964282Z","end":"2024-08-12T11:07:39.100015Z","steps":["trace[2070505261] 'agreement among raft nodes before linearized reading'  (duration: 135.580265ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T11:07:47.060183Z","caller":"traceutil/trace.go:171","msg":"trace[1920807292] linearizableReadLoop","detail":"{readStateIndex:725; appliedIndex:724; }","duration":"215.381038ms","start":"2024-08-12T11:07:46.844779Z","end":"2024-08-12T11:07:47.06016Z","steps":["trace[1920807292] 'read index received'  (duration: 215.140908ms)","trace[1920807292] 'applied index is now lower than readState.Index'  (duration: 239.06µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-12T11:07:47.060278Z","caller":"traceutil/trace.go:171","msg":"trace[1939185951] transaction","detail":"{read_only:false; response_revision:678; number_of_response:1; }","duration":"229.732803ms","start":"2024-08-12T11:07:46.830536Z","end":"2024-08-12T11:07:47.060269Z","steps":["trace[1939185951] 'process raft request'  (duration: 229.429246ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T11:07:47.06076Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.962054ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/\" range_end:\"/registry/replicasets0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-12T11:07:47.060839Z","caller":"traceutil/trace.go:171","msg":"trace[1526192585] range","detail":"{range_begin:/registry/replicasets/; range_end:/registry/replicasets0; response_count:0; response_revision:678; }","duration":"216.074499ms","start":"2024-08-12T11:07:46.844754Z","end":"2024-08-12T11:07:47.060829Z","steps":["trace[1526192585] 'agreement among raft nodes before linearized reading'  (duration: 215.961305ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T11:08:33.417102Z","caller":"traceutil/trace.go:171","msg":"trace[1729162249] transaction","detail":"{read_only:false; response_revision:763; number_of_response:1; }","duration":"116.483812ms","start":"2024-08-12T11:08:33.300588Z","end":"2024-08-12T11:08:33.417072Z","steps":["trace[1729162249] 'process raft request'  (duration: 116.376117ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T11:10:53.754232Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-12T11:10:53.754351Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-053297","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.95:2380"],"advertise-client-urls":["https://192.168.39.95:2379"]}
	{"level":"warn","ts":"2024-08-12T11:10:53.754501Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-12T11:10:53.754605Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-12T11:10:53.833442Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.95:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-12T11:10:53.833485Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.95:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-12T11:10:53.83356Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"a71e7bac075997","current-leader-member-id":"a71e7bac075997"}
	{"level":"info","ts":"2024-08-12T11:10:53.836252Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.95:2380"}
	{"level":"info","ts":"2024-08-12T11:10:53.83641Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.95:2380"}
	{"level":"info","ts":"2024-08-12T11:10:53.836444Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-053297","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.95:2380"],"advertise-client-urls":["https://192.168.39.95:2379"]}
	
	
	==> kernel <==
	 11:14:22 up 9 min,  0 users,  load average: 0.25, 0.19, 0.11
	Linux multinode-053297 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a911be0f1400957d10189ab0274b18180559feb17c632377665040859f3a01ec] <==
	I0812 11:10:13.581964       1 main.go:322] Node multinode-053297-m03 has CIDR [10.244.3.0/24] 
	I0812 11:10:23.586195       1 main.go:295] Handling node with IPs: map[192.168.39.95:{}]
	I0812 11:10:23.586382       1 main.go:299] handling current node
	I0812 11:10:23.586422       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0812 11:10:23.586441       1 main.go:322] Node multinode-053297-m02 has CIDR [10.244.1.0/24] 
	I0812 11:10:23.586625       1 main.go:295] Handling node with IPs: map[192.168.39.182:{}]
	I0812 11:10:23.586705       1 main.go:322] Node multinode-053297-m03 has CIDR [10.244.3.0/24] 
	I0812 11:10:33.579030       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0812 11:10:33.579063       1 main.go:322] Node multinode-053297-m02 has CIDR [10.244.1.0/24] 
	I0812 11:10:33.579210       1 main.go:295] Handling node with IPs: map[192.168.39.182:{}]
	I0812 11:10:33.579215       1 main.go:322] Node multinode-053297-m03 has CIDR [10.244.3.0/24] 
	I0812 11:10:33.579348       1 main.go:295] Handling node with IPs: map[192.168.39.95:{}]
	I0812 11:10:33.579355       1 main.go:299] handling current node
	I0812 11:10:43.581118       1 main.go:295] Handling node with IPs: map[192.168.39.95:{}]
	I0812 11:10:43.581154       1 main.go:299] handling current node
	I0812 11:10:43.581172       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0812 11:10:43.581177       1 main.go:322] Node multinode-053297-m02 has CIDR [10.244.1.0/24] 
	I0812 11:10:43.581328       1 main.go:295] Handling node with IPs: map[192.168.39.182:{}]
	I0812 11:10:43.581333       1 main.go:322] Node multinode-053297-m03 has CIDR [10.244.3.0/24] 
	I0812 11:10:53.587560       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0812 11:10:53.587604       1 main.go:322] Node multinode-053297-m02 has CIDR [10.244.1.0/24] 
	I0812 11:10:53.587757       1 main.go:295] Handling node with IPs: map[192.168.39.182:{}]
	I0812 11:10:53.587764       1 main.go:322] Node multinode-053297-m03 has CIDR [10.244.3.0/24] 
	I0812 11:10:53.587853       1 main.go:295] Handling node with IPs: map[192.168.39.95:{}]
	I0812 11:10:53.587859       1 main.go:299] handling current node
	
	
	==> kindnet [e6c2e77ab819a29eb0e4f2c3452a661ad97ceed1d3e7a641e515d58b7a0bba27] <==
	I0812 11:13:32.673523       1 main.go:322] Node multinode-053297-m03 has CIDR [10.244.3.0/24] 
	I0812 11:13:42.671666       1 main.go:295] Handling node with IPs: map[192.168.39.95:{}]
	I0812 11:13:42.671786       1 main.go:299] handling current node
	I0812 11:13:42.671875       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0812 11:13:42.671899       1 main.go:322] Node multinode-053297-m02 has CIDR [10.244.1.0/24] 
	I0812 11:13:42.672075       1 main.go:295] Handling node with IPs: map[192.168.39.182:{}]
	I0812 11:13:42.672148       1 main.go:322] Node multinode-053297-m03 has CIDR [10.244.3.0/24] 
	I0812 11:13:52.673026       1 main.go:295] Handling node with IPs: map[192.168.39.95:{}]
	I0812 11:13:52.673099       1 main.go:299] handling current node
	I0812 11:13:52.673116       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0812 11:13:52.673121       1 main.go:322] Node multinode-053297-m02 has CIDR [10.244.1.0/24] 
	I0812 11:13:52.673272       1 main.go:295] Handling node with IPs: map[192.168.39.182:{}]
	I0812 11:13:52.673293       1 main.go:322] Node multinode-053297-m03 has CIDR [10.244.3.0/24] 
	I0812 11:14:02.672186       1 main.go:295] Handling node with IPs: map[192.168.39.95:{}]
	I0812 11:14:02.672331       1 main.go:299] handling current node
	I0812 11:14:02.672404       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0812 11:14:02.672445       1 main.go:322] Node multinode-053297-m02 has CIDR [10.244.1.0/24] 
	I0812 11:14:02.672757       1 main.go:295] Handling node with IPs: map[192.168.39.182:{}]
	I0812 11:14:02.672887       1 main.go:322] Node multinode-053297-m03 has CIDR [10.244.2.0/24] 
	I0812 11:14:12.671340       1 main.go:295] Handling node with IPs: map[192.168.39.95:{}]
	I0812 11:14:12.671500       1 main.go:299] handling current node
	I0812 11:14:12.671534       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0812 11:14:12.671598       1 main.go:322] Node multinode-053297-m02 has CIDR [10.244.1.0/24] 
	I0812 11:14:12.671788       1 main.go:295] Handling node with IPs: map[192.168.39.182:{}]
	I0812 11:14:12.671908       1 main.go:322] Node multinode-053297-m03 has CIDR [10.244.2.0/24] 
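
Note that the two kindnet instances disagree on multinode-053297-m03's CIDR: the older instance advertises 10.244.3.0/24, while the newer one switches to 10.244.2.0/24 once the node re-registers, matching the PodCIDR in the node description above. A quick way to confirm what the API server currently has assigned (same context-name assumption as before):

  kubectl --context multinode-053297 get nodes \
    -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR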
	
	
	==> kube-apiserver [09a8e5a83ca1641d7a329a605b044bc9ec82ed50e1ce7016c7fc516380488ab9] <==
	E0812 11:07:12.660449       1 conn.go:339] Error on socket receive: read tcp 192.168.39.95:8443->192.168.39.1:60066: use of closed network connection
	E0812 11:07:12.829238       1 conn.go:339] Error on socket receive: read tcp 192.168.39.95:8443->192.168.39.1:60078: use of closed network connection
	E0812 11:07:12.994686       1 conn.go:339] Error on socket receive: read tcp 192.168.39.95:8443->192.168.39.1:60084: use of closed network connection
	E0812 11:07:13.157558       1 conn.go:339] Error on socket receive: read tcp 192.168.39.95:8443->192.168.39.1:60096: use of closed network connection
	E0812 11:07:13.426425       1 conn.go:339] Error on socket receive: read tcp 192.168.39.95:8443->192.168.39.1:60112: use of closed network connection
	E0812 11:07:13.620653       1 conn.go:339] Error on socket receive: read tcp 192.168.39.95:8443->192.168.39.1:60130: use of closed network connection
	E0812 11:07:13.785491       1 conn.go:339] Error on socket receive: read tcp 192.168.39.95:8443->192.168.39.1:60148: use of closed network connection
	E0812 11:07:13.955005       1 conn.go:339] Error on socket receive: read tcp 192.168.39.95:8443->192.168.39.1:60174: use of closed network connection
	I0812 11:10:53.753842       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0812 11:10:53.757571       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:10:53.757723       1 logging.go:59] [core] [Channel #14 SubChannel #15] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:10:53.757750       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:10:53.786938       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:10:53.787018       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:10:53.787059       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:10:53.787110       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:10:53.787179       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:10:53.787229       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:10:53.787315       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:10:53.787371       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:10:53.787427       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:10:53.787466       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:10:53.787518       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:10:53.787569       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:10:53.787606       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [5f719b2750bcaaced7ca32c1946a693dd7d09ae45a6292d87eab5c88196f9a9a] <==
	I0812 11:12:40.362901       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0812 11:12:40.363080       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0812 11:12:40.364770       1 shared_informer.go:320] Caches are synced for configmaps
	I0812 11:12:40.364928       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0812 11:12:40.365476       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0812 11:12:40.365499       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0812 11:12:40.366028       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0812 11:12:40.372705       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0812 11:12:40.379501       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0812 11:12:40.379619       1 policy_source.go:224] refreshing policies
	I0812 11:12:40.383164       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0812 11:12:40.386697       1 aggregator.go:165] initial CRD sync complete...
	I0812 11:12:40.386765       1 autoregister_controller.go:141] Starting autoregister controller
	I0812 11:12:40.386773       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0812 11:12:40.386780       1 cache.go:39] Caches are synced for autoregister controller
	E0812 11:12:40.387335       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0812 11:12:40.463558       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0812 11:12:41.269768       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0812 11:12:42.496422       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0812 11:12:42.634062       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0812 11:12:42.648778       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0812 11:12:42.722129       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0812 11:12:42.729276       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0812 11:12:52.764873       1 controller.go:615] quota admission added evaluator for: endpoints
	I0812 11:12:52.862129       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [87e5feab93ae29a05379e2f351e9c8355a4f866d237d4549c6c1992523cecef1] <==
	I0812 11:06:45.197969       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-053297-m02\" does not exist"
	I0812 11:06:45.209246       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-053297-m02" podCIDRs=["10.244.1.0/24"]
	I0812 11:06:47.462512       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-053297-m02"
	I0812 11:07:05.497206       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-053297-m02"
	I0812 11:07:07.820310       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.121063ms"
	I0812 11:07:07.827711       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.339141ms"
	I0812 11:07:07.853786       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.008619ms"
	I0812 11:07:07.854018       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="105.149µs"
	I0812 11:07:11.262097       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.230967ms"
	I0812 11:07:11.262255       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.943µs"
	I0812 11:07:11.880512       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.446584ms"
	I0812 11:07:11.880593       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.477µs"
	I0812 11:07:39.102749       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-053297-m02"
	I0812 11:07:39.103057       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-053297-m03\" does not exist"
	I0812 11:07:39.130558       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-053297-m03" podCIDRs=["10.244.2.0/24"]
	I0812 11:07:42.488578       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-053297-m03"
	I0812 11:07:59.715346       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-053297-m02"
	I0812 11:08:28.044370       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-053297-m02"
	I0812 11:08:29.123102       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-053297-m03\" does not exist"
	I0812 11:08:29.123421       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-053297-m02"
	I0812 11:08:29.146270       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-053297-m03" podCIDRs=["10.244.3.0/24"]
	I0812 11:08:48.312095       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-053297-m02"
	I0812 11:09:32.543690       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-053297-m02"
	I0812 11:09:32.607124       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.66078ms"
	I0812 11:09:32.607208       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.523µs"
	
	
	==> kube-controller-manager [bca90956398890550d64b7ae94e3ad47cacec831627c1ad0ec287a485e04a8ee] <==
	I0812 11:12:53.282353       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0812 11:12:53.325579       1 shared_informer.go:320] Caches are synced for garbage collector
	I0812 11:13:15.477446       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.167508ms"
	I0812 11:13:15.477615       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.539µs"
	I0812 11:13:15.488280       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.606989ms"
	I0812 11:13:15.488623       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="94.679µs"
	I0812 11:13:19.635650       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-053297-m02\" does not exist"
	I0812 11:13:19.647600       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-053297-m02" podCIDRs=["10.244.1.0/24"]
	I0812 11:13:21.550011       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.924µs"
	I0812 11:13:21.564571       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.544µs"
	I0812 11:13:21.577391       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.499µs"
	I0812 11:13:21.606757       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.17µs"
	I0812 11:13:21.616073       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.639µs"
	I0812 11:13:21.620442       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.973µs"
	I0812 11:13:23.730096       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.258µs"
	I0812 11:13:39.406273       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-053297-m02"
	I0812 11:13:39.426541       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.074µs"
	I0812 11:13:39.442388       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.396µs"
	I0812 11:13:43.087593       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.107604ms"
	I0812 11:13:43.087688       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.293µs"
	I0812 11:13:57.704908       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-053297-m02"
	I0812 11:13:58.738424       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-053297-m03\" does not exist"
	I0812 11:13:58.738597       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-053297-m02"
	I0812 11:13:58.748211       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-053297-m03" podCIDRs=["10.244.2.0/24"]
	I0812 11:14:18.539440       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-053297-m02"
	
	
	==> kube-proxy [8f04ca85ef86602d88590a245bc263472aa6a03ddbee946668f6b1ce2bc10229] <==
	I0812 11:05:59.258240       1 server_linux.go:69] "Using iptables proxy"
	I0812 11:05:59.309869       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.95"]
	I0812 11:05:59.403693       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0812 11:05:59.403767       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0812 11:05:59.403834       1 server_linux.go:165] "Using iptables Proxier"
	I0812 11:05:59.414858       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0812 11:05:59.415469       1 server.go:872] "Version info" version="v1.30.3"
	I0812 11:05:59.415484       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 11:05:59.417542       1 config.go:192] "Starting service config controller"
	I0812 11:05:59.418406       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0812 11:05:59.418637       1 config.go:101] "Starting endpoint slice config controller"
	I0812 11:05:59.418644       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0812 11:05:59.420760       1 config.go:319] "Starting node config controller"
	I0812 11:05:59.420767       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0812 11:05:59.520920       1 shared_informer.go:320] Caches are synced for node config
	I0812 11:05:59.520953       1 shared_informer.go:320] Caches are synced for service config
	I0812 11:05:59.520982       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [de11bd5fb35f68c340162aa9fb9dfdbc5361bcd9d722e42f8d920be459f852db] <==
	I0812 11:12:41.800339       1 server_linux.go:69] "Using iptables proxy"
	I0812 11:12:41.818676       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.95"]
	I0812 11:12:41.913502       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0812 11:12:41.913570       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0812 11:12:41.913588       1 server_linux.go:165] "Using iptables Proxier"
	I0812 11:12:41.916022       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0812 11:12:41.916220       1 server.go:872] "Version info" version="v1.30.3"
	I0812 11:12:41.916246       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 11:12:41.918176       1 config.go:192] "Starting service config controller"
	I0812 11:12:41.918213       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0812 11:12:41.918243       1 config.go:101] "Starting endpoint slice config controller"
	I0812 11:12:41.918246       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0812 11:12:41.918777       1 config.go:319] "Starting node config controller"
	I0812 11:12:41.920857       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0812 11:12:42.018483       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0812 11:12:42.018570       1 shared_informer.go:320] Caches are synced for service config
	I0812 11:12:42.021092       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1a863a59bad8c866e770b207ff1b6065b57aefa733c7a7f3eb8cb7fcc93b2d35] <==
	I0812 11:12:38.791447       1 serving.go:380] Generated self-signed cert in-memory
	I0812 11:12:40.392275       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0812 11:12:40.392313       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 11:12:40.398889       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0812 11:12:40.398976       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0812 11:12:40.398983       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0812 11:12:40.399005       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0812 11:12:40.404021       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0812 11:12:40.404078       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0812 11:12:40.404121       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0812 11:12:40.404129       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0812 11:12:40.500146       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0812 11:12:40.505001       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0812 11:12:40.505101       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kube-scheduler [7e98b01dde217b13d66ed5c05501eace36aa404485298db337b69ff6cc4f635e] <==
	E0812 11:05:41.812969       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0812 11:05:41.876095       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0812 11:05:41.876140       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0812 11:05:41.933559       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0812 11:05:41.933662       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0812 11:05:41.934187       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0812 11:05:41.934248       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0812 11:05:41.975212       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0812 11:05:41.975254       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0812 11:05:42.021230       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0812 11:05:42.021342       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0812 11:05:42.040880       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0812 11:05:42.040924       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0812 11:05:42.143350       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0812 11:05:42.143447       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0812 11:05:42.169645       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0812 11:05:42.169760       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0812 11:05:42.173504       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0812 11:05:42.173626       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0812 11:05:42.286583       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0812 11:05:42.286846       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0812 11:05:42.326651       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0812 11:05:42.327192       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0812 11:05:45.081501       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0812 11:10:53.765714       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 12 11:12:37 multinode-053297 kubelet[3121]: E0812 11:12:37.603677    3121 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.95:8443: connect: connection refused" node="multinode-053297"
	Aug 12 11:12:38 multinode-053297 kubelet[3121]: I0812 11:12:38.405694    3121 kubelet_node_status.go:73] "Attempting to register node" node="multinode-053297"
	Aug 12 11:12:40 multinode-053297 kubelet[3121]: I0812 11:12:40.425507    3121 kubelet_node_status.go:112] "Node was previously registered" node="multinode-053297"
	Aug 12 11:12:40 multinode-053297 kubelet[3121]: I0812 11:12:40.426058    3121 kubelet_node_status.go:76] "Successfully registered node" node="multinode-053297"
	Aug 12 11:12:40 multinode-053297 kubelet[3121]: I0812 11:12:40.427708    3121 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 12 11:12:40 multinode-053297 kubelet[3121]: I0812 11:12:40.428671    3121 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 12 11:12:40 multinode-053297 kubelet[3121]: I0812 11:12:40.895693    3121 apiserver.go:52] "Watching apiserver"
	Aug 12 11:12:40 multinode-053297 kubelet[3121]: I0812 11:12:40.899952    3121 topology_manager.go:215] "Topology Admit Handler" podUID="552ad659-4e0c-4004-8ed7-015c99592268" podNamespace="kube-system" podName="kindnet-t65tb"
	Aug 12 11:12:40 multinode-053297 kubelet[3121]: I0812 11:12:40.900129    3121 topology_manager.go:215] "Topology Admit Handler" podUID="4268e67b-f866-48c0-baff-19b34b4c2b0a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gs2rm"
	Aug 12 11:12:40 multinode-053297 kubelet[3121]: I0812 11:12:40.900900    3121 topology_manager.go:215] "Topology Admit Handler" podUID="f528af29-5853-4435-a1f4-92d071412e75" podNamespace="kube-system" podName="kube-proxy-9c48w"
	Aug 12 11:12:40 multinode-053297 kubelet[3121]: I0812 11:12:40.901009    3121 topology_manager.go:215] "Topology Admit Handler" podUID="87ca637d-1e99-4fbb-8b07-75b1d5100c35" podNamespace="kube-system" podName="storage-provisioner"
	Aug 12 11:12:40 multinode-053297 kubelet[3121]: I0812 11:12:40.901105    3121 topology_manager.go:215] "Topology Admit Handler" podUID="5bb7b665-dca0-4f7d-9582-b62b8c1a5e57" podNamespace="default" podName="busybox-fc5497c4f-242jl"
	Aug 12 11:12:40 multinode-053297 kubelet[3121]: I0812 11:12:40.996214    3121 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Aug 12 11:12:41 multinode-053297 kubelet[3121]: I0812 11:12:41.020364    3121 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f528af29-5853-4435-a1f4-92d071412e75-xtables-lock\") pod \"kube-proxy-9c48w\" (UID: \"f528af29-5853-4435-a1f4-92d071412e75\") " pod="kube-system/kube-proxy-9c48w"
	Aug 12 11:12:41 multinode-053297 kubelet[3121]: I0812 11:12:41.020464    3121 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/552ad659-4e0c-4004-8ed7-015c99592268-cni-cfg\") pod \"kindnet-t65tb\" (UID: \"552ad659-4e0c-4004-8ed7-015c99592268\") " pod="kube-system/kindnet-t65tb"
	Aug 12 11:12:41 multinode-053297 kubelet[3121]: I0812 11:12:41.020501    3121 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/552ad659-4e0c-4004-8ed7-015c99592268-xtables-lock\") pod \"kindnet-t65tb\" (UID: \"552ad659-4e0c-4004-8ed7-015c99592268\") " pod="kube-system/kindnet-t65tb"
	Aug 12 11:12:41 multinode-053297 kubelet[3121]: I0812 11:12:41.020557    3121 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/552ad659-4e0c-4004-8ed7-015c99592268-lib-modules\") pod \"kindnet-t65tb\" (UID: \"552ad659-4e0c-4004-8ed7-015c99592268\") " pod="kube-system/kindnet-t65tb"
	Aug 12 11:12:41 multinode-053297 kubelet[3121]: I0812 11:12:41.020579    3121 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f528af29-5853-4435-a1f4-92d071412e75-lib-modules\") pod \"kube-proxy-9c48w\" (UID: \"f528af29-5853-4435-a1f4-92d071412e75\") " pod="kube-system/kube-proxy-9c48w"
	Aug 12 11:12:41 multinode-053297 kubelet[3121]: I0812 11:12:41.020652    3121 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/87ca637d-1e99-4fbb-8b07-75b1d5100c35-tmp\") pod \"storage-provisioner\" (UID: \"87ca637d-1e99-4fbb-8b07-75b1d5100c35\") " pod="kube-system/storage-provisioner"
	Aug 12 11:12:47 multinode-053297 kubelet[3121]: I0812 11:12:47.588854    3121 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 12 11:13:36 multinode-053297 kubelet[3121]: E0812 11:13:36.966859    3121 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 11:13:36 multinode-053297 kubelet[3121]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 11:13:36 multinode-053297 kubelet[3121]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 11:13:36 multinode-053297 kubelet[3121]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 11:13:36 multinode-053297 kubelet[3121]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0812 11:14:21.048705   41423 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19409-3774/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
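Note on the stderr above: "bufio.Scanner: token too long" is the standard Go failure when a single line exceeds bufio's default 64 KiB token limit, which is consistent with lastStart.txt containing very long log lines. Below is a minimal sketch of reading such a file with an enlarged buffer; the path is taken from the log, while the 10 MiB cap and the rest of the program are illustrative assumptions, not minikube's actual logs.go code.

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
)

func main() {
	// Hypothetical example: lastStart.txt holds very long single lines, so the
	// default bufio.MaxScanTokenSize (64 KiB) must be raised explicitly.
	f, err := os.Open("/home/jenkins/minikube-integration/19409-3774/.minikube/logs/lastStart.txt")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	// Allow tokens up to 10 MiB (assumed size) instead of the 64 KiB default.
	scanner.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

	for scanner.Scan() {
		fmt.Println(scanner.Text())
	}
	if err := scanner.Err(); err != nil {
		log.Fatalf("scan failed: %v", err) // the default limit would report "token too long" here
	}
}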
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-053297 -n multinode-053297
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-053297 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (331.87s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 stop
E0812 11:15:45.937941   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-053297 stop: exit status 82 (2m0.469004222s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-053297-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-053297 stop": exit status 82
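The stop above exits with status 82 after minikube reports GUEST_STOP_TIMEOUT, i.e. the m02 VM never left the "Running" state within minikube's own wait window. The following is a hedged sketch of how a caller could bound and retry the stop itself; the binary path, 3-minute deadline, and retry policy are assumptions for illustration, not part of multinode_test.go.

package main

import (
	"context"
	"log"
	"os/exec"
	"time"
)

// stopProfile runs "minikube stop" for the given profile, retrying a few times
// with a per-attempt deadline. Purely illustrative; not minikube's own logic.
func stopProfile(bin, profile string) error {
	var lastErr error
	for attempt := 1; attempt <= 3; attempt++ {
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
		cmd := exec.CommandContext(ctx, bin, "-p", profile, "stop")
		out, runErr := cmd.CombinedOutput()
		cancel()
		if runErr == nil {
			return nil
		}
		lastErr = runErr
		log.Printf("stop attempt %d failed: %v\n%s", attempt, runErr, out)
		time.Sleep(10 * time.Second)
	}
	return lastErr
}

func main() {
	if err := stopProfile("out/minikube-linux-amd64", "multinode-053297"); err != nil {
		log.Fatalf("could not stop cluster: %v", err)
	}
}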
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-053297 status: exit status 3 (18.87555768s)

                                                
                                                
-- stdout --
	multinode-053297
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-053297-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0812 11:16:44.605162   42095 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.9:22: connect: no route to host
	E0812 11:16:44.605216   42095 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.9:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-053297 status" : exit status 3
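Both status errors above come down to m02 being unreachable over SSH (dial tcp 192.168.39.9:22: no route to host), which matches a VM that was partially torn down rather than cleanly stopped. A small hypothetical pre-check along those lines follows, using only the address and port from the log; the 5-second timeout is an arbitrary choice, and this is not how minikube's status.go is implemented.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the node's SSH port before attempting a full SSH session.
	// 192.168.39.9 is the m02 address reported in the log.
	addr := net.JoinHostPort("192.168.39.9", "22")
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		fmt.Printf("node unreachable: %v\n", err) // e.g. "connect: no route to host"
		return
	}
	conn.Close()
	fmt.Println("SSH port reachable")
}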
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-053297 -n multinode-053297
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-053297 logs -n 25: (1.441257757s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-053297 ssh -n                                                                 | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | multinode-053297-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-053297 cp multinode-053297-m02:/home/docker/cp-test.txt                       | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | multinode-053297:/home/docker/cp-test_multinode-053297-m02_multinode-053297.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-053297 ssh -n                                                                 | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | multinode-053297-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-053297 ssh -n multinode-053297 sudo cat                                       | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | /home/docker/cp-test_multinode-053297-m02_multinode-053297.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-053297 cp multinode-053297-m02:/home/docker/cp-test.txt                       | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | multinode-053297-m03:/home/docker/cp-test_multinode-053297-m02_multinode-053297-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-053297 ssh -n                                                                 | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | multinode-053297-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-053297 ssh -n multinode-053297-m03 sudo cat                                   | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | /home/docker/cp-test_multinode-053297-m02_multinode-053297-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-053297 cp testdata/cp-test.txt                                                | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | multinode-053297-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-053297 ssh -n                                                                 | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | multinode-053297-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-053297 cp multinode-053297-m03:/home/docker/cp-test.txt                       | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4188486420/001/cp-test_multinode-053297-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-053297 ssh -n                                                                 | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | multinode-053297-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-053297 cp multinode-053297-m03:/home/docker/cp-test.txt                       | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | multinode-053297:/home/docker/cp-test_multinode-053297-m03_multinode-053297.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-053297 ssh -n                                                                 | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | multinode-053297-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-053297 ssh -n multinode-053297 sudo cat                                       | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | /home/docker/cp-test_multinode-053297-m03_multinode-053297.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-053297 cp multinode-053297-m03:/home/docker/cp-test.txt                       | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | multinode-053297-m02:/home/docker/cp-test_multinode-053297-m03_multinode-053297-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-053297 ssh -n                                                                 | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | multinode-053297-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-053297 ssh -n multinode-053297-m02 sudo cat                                   | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | /home/docker/cp-test_multinode-053297-m03_multinode-053297-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-053297 node stop m03                                                          | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	| node    | multinode-053297 node start                                                             | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC | 12 Aug 24 11:08 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-053297                                                                | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC |                     |
	| stop    | -p multinode-053297                                                                     | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:08 UTC |                     |
	| start   | -p multinode-053297                                                                     | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:10 UTC | 12 Aug 24 11:14 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-053297                                                                | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:14 UTC |                     |
	| node    | multinode-053297 node delete                                                            | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:14 UTC | 12 Aug 24 11:14 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-053297 stop                                                                   | multinode-053297 | jenkins | v1.33.1 | 12 Aug 24 11:14 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 11:10:52
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 11:10:52.899612   40267 out.go:291] Setting OutFile to fd 1 ...
	I0812 11:10:52.899925   40267 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:10:52.899936   40267 out.go:304] Setting ErrFile to fd 2...
	I0812 11:10:52.899942   40267 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:10:52.900173   40267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 11:10:52.900789   40267 out.go:298] Setting JSON to false
	I0812 11:10:52.901764   40267 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3194,"bootTime":1723457859,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 11:10:52.901832   40267 start.go:139] virtualization: kvm guest
	I0812 11:10:52.904464   40267 out.go:177] * [multinode-053297] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 11:10:52.905986   40267 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 11:10:52.906025   40267 notify.go:220] Checking for updates...
	I0812 11:10:52.908947   40267 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 11:10:52.910680   40267 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 11:10:52.912390   40267 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 11:10:52.913974   40267 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 11:10:52.915358   40267 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 11:10:52.917150   40267 config.go:182] Loaded profile config "multinode-053297": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 11:10:52.917290   40267 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 11:10:52.917761   40267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:10:52.917832   40267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:10:52.933987   40267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33457
	I0812 11:10:52.934435   40267 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:10:52.935055   40267 main.go:141] libmachine: Using API Version  1
	I0812 11:10:52.935074   40267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:10:52.935470   40267 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:10:52.935686   40267 main.go:141] libmachine: (multinode-053297) Calling .DriverName
	I0812 11:10:52.973076   40267 out.go:177] * Using the kvm2 driver based on existing profile
	I0812 11:10:52.974377   40267 start.go:297] selected driver: kvm2
	I0812 11:10:52.974394   40267 start.go:901] validating driver "kvm2" against &{Name:multinode-053297 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-053297 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.9 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.182 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:10:52.974535   40267 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 11:10:52.974870   40267 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 11:10:52.974932   40267 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19409-3774/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 11:10:52.990429   40267 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 11:10:52.991365   40267 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 11:10:52.991447   40267 cni.go:84] Creating CNI manager for ""
	I0812 11:10:52.991462   40267 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0812 11:10:52.991542   40267 start.go:340] cluster config:
	{Name:multinode-053297 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-053297 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.9 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.182 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:10:52.991705   40267 iso.go:125] acquiring lock: {Name:mk817f4bb6a5031e68978029203802957205757f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 11:10:52.993654   40267 out.go:177] * Starting "multinode-053297" primary control-plane node in "multinode-053297" cluster
	I0812 11:10:52.994931   40267 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 11:10:52.994984   40267 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0812 11:10:52.994994   40267 cache.go:56] Caching tarball of preloaded images
	I0812 11:10:52.995095   40267 preload.go:172] Found /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 11:10:52.995107   40267 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 11:10:52.995237   40267 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/multinode-053297/config.json ...
	I0812 11:10:52.995449   40267 start.go:360] acquireMachinesLock for multinode-053297: {Name:mkf1f3ff7da562a087a1e344d13e67c3c8140973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 11:10:52.995491   40267 start.go:364] duration metric: took 23.538µs to acquireMachinesLock for "multinode-053297"
	I0812 11:10:52.995505   40267 start.go:96] Skipping create...Using existing machine configuration
	I0812 11:10:52.995511   40267 fix.go:54] fixHost starting: 
	I0812 11:10:52.995764   40267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:10:52.995805   40267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:10:53.010740   40267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38183
	I0812 11:10:53.011200   40267 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:10:53.011697   40267 main.go:141] libmachine: Using API Version  1
	I0812 11:10:53.011721   40267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:10:53.012001   40267 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:10:53.012177   40267 main.go:141] libmachine: (multinode-053297) Calling .DriverName
	I0812 11:10:53.012347   40267 main.go:141] libmachine: (multinode-053297) Calling .GetState
	I0812 11:10:53.013953   40267 fix.go:112] recreateIfNeeded on multinode-053297: state=Running err=<nil>
	W0812 11:10:53.013969   40267 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 11:10:53.015835   40267 out.go:177] * Updating the running kvm2 "multinode-053297" VM ...
	I0812 11:10:53.017182   40267 machine.go:94] provisionDockerMachine start ...
	I0812 11:10:53.017207   40267 main.go:141] libmachine: (multinode-053297) Calling .DriverName
	I0812 11:10:53.017425   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHHostname
	I0812 11:10:53.020642   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:10:53.021332   40267 main.go:141] libmachine: (multinode-053297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:99:5e", ip: ""} in network mk-multinode-053297: {Iface:virbr1 ExpiryTime:2024-08-12 12:05:19 +0000 UTC Type:0 Mac:52:54:00:b2:99:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-053297 Clientid:01:52:54:00:b2:99:5e}
	I0812 11:10:53.021361   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined IP address 192.168.39.95 and MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:10:53.021520   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHPort
	I0812 11:10:53.021690   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHKeyPath
	I0812 11:10:53.021867   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHKeyPath
	I0812 11:10:53.021979   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHUsername
	I0812 11:10:53.022121   40267 main.go:141] libmachine: Using SSH client type: native
	I0812 11:10:53.022363   40267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0812 11:10:53.022376   40267 main.go:141] libmachine: About to run SSH command:
	hostname
	I0812 11:10:53.138165   40267 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-053297
	
	I0812 11:10:53.138209   40267 main.go:141] libmachine: (multinode-053297) Calling .GetMachineName
	I0812 11:10:53.138469   40267 buildroot.go:166] provisioning hostname "multinode-053297"
	I0812 11:10:53.138489   40267 main.go:141] libmachine: (multinode-053297) Calling .GetMachineName
	I0812 11:10:53.138700   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHHostname
	I0812 11:10:53.141550   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:10:53.142051   40267 main.go:141] libmachine: (multinode-053297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:99:5e", ip: ""} in network mk-multinode-053297: {Iface:virbr1 ExpiryTime:2024-08-12 12:05:19 +0000 UTC Type:0 Mac:52:54:00:b2:99:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-053297 Clientid:01:52:54:00:b2:99:5e}
	I0812 11:10:53.142082   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined IP address 192.168.39.95 and MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:10:53.142200   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHPort
	I0812 11:10:53.142385   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHKeyPath
	I0812 11:10:53.142558   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHKeyPath
	I0812 11:10:53.142706   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHUsername
	I0812 11:10:53.142883   40267 main.go:141] libmachine: Using SSH client type: native
	I0812 11:10:53.143041   40267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0812 11:10:53.143053   40267 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-053297 && echo "multinode-053297" | sudo tee /etc/hostname
	I0812 11:10:53.268189   40267 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-053297
	
	I0812 11:10:53.268215   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHHostname
	I0812 11:10:53.270867   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:10:53.271239   40267 main.go:141] libmachine: (multinode-053297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:99:5e", ip: ""} in network mk-multinode-053297: {Iface:virbr1 ExpiryTime:2024-08-12 12:05:19 +0000 UTC Type:0 Mac:52:54:00:b2:99:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-053297 Clientid:01:52:54:00:b2:99:5e}
	I0812 11:10:53.271281   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined IP address 192.168.39.95 and MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:10:53.271430   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHPort
	I0812 11:10:53.271611   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHKeyPath
	I0812 11:10:53.271761   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHKeyPath
	I0812 11:10:53.271923   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHUsername
	I0812 11:10:53.272043   40267 main.go:141] libmachine: Using SSH client type: native
	I0812 11:10:53.272218   40267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0812 11:10:53.272236   40267 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-053297' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-053297/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-053297' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 11:10:53.385740   40267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 11:10:53.385777   40267 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19409-3774/.minikube CaCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19409-3774/.minikube}
	I0812 11:10:53.385832   40267 buildroot.go:174] setting up certificates
	I0812 11:10:53.385911   40267 provision.go:84] configureAuth start
	I0812 11:10:53.385958   40267 main.go:141] libmachine: (multinode-053297) Calling .GetMachineName
	I0812 11:10:53.386276   40267 main.go:141] libmachine: (multinode-053297) Calling .GetIP
	I0812 11:10:53.389383   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:10:53.389859   40267 main.go:141] libmachine: (multinode-053297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:99:5e", ip: ""} in network mk-multinode-053297: {Iface:virbr1 ExpiryTime:2024-08-12 12:05:19 +0000 UTC Type:0 Mac:52:54:00:b2:99:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-053297 Clientid:01:52:54:00:b2:99:5e}
	I0812 11:10:53.389887   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined IP address 192.168.39.95 and MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:10:53.390078   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHHostname
	I0812 11:10:53.392513   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:10:53.392856   40267 main.go:141] libmachine: (multinode-053297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:99:5e", ip: ""} in network mk-multinode-053297: {Iface:virbr1 ExpiryTime:2024-08-12 12:05:19 +0000 UTC Type:0 Mac:52:54:00:b2:99:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-053297 Clientid:01:52:54:00:b2:99:5e}
	I0812 11:10:53.392900   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined IP address 192.168.39.95 and MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:10:53.393040   40267 provision.go:143] copyHostCerts
	I0812 11:10:53.393070   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem
	I0812 11:10:53.393113   40267 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem, removing ...
	I0812 11:10:53.393122   40267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem
	I0812 11:10:53.393205   40267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem (1082 bytes)
	I0812 11:10:53.393318   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem
	I0812 11:10:53.393345   40267 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem, removing ...
	I0812 11:10:53.393355   40267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem
	I0812 11:10:53.393395   40267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem (1123 bytes)
	I0812 11:10:53.393471   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem
	I0812 11:10:53.393495   40267 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem, removing ...
	I0812 11:10:53.393504   40267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem
	I0812 11:10:53.393538   40267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem (1679 bytes)
	I0812 11:10:53.393638   40267 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem org=jenkins.multinode-053297 san=[127.0.0.1 192.168.39.95 localhost minikube multinode-053297]
	I0812 11:10:53.452627   40267 provision.go:177] copyRemoteCerts
	I0812 11:10:53.452679   40267 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 11:10:53.452703   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHHostname
	I0812 11:10:53.455651   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:10:53.455975   40267 main.go:141] libmachine: (multinode-053297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:99:5e", ip: ""} in network mk-multinode-053297: {Iface:virbr1 ExpiryTime:2024-08-12 12:05:19 +0000 UTC Type:0 Mac:52:54:00:b2:99:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-053297 Clientid:01:52:54:00:b2:99:5e}
	I0812 11:10:53.456024   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined IP address 192.168.39.95 and MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:10:53.456233   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHPort
	I0812 11:10:53.456463   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHKeyPath
	I0812 11:10:53.456633   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHUsername
	I0812 11:10:53.456772   40267 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/multinode-053297/id_rsa Username:docker}
	I0812 11:10:53.543879   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0812 11:10:53.543956   40267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0812 11:10:53.569869   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0812 11:10:53.569962   40267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0812 11:10:53.596610   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0812 11:10:53.596681   40267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0812 11:10:53.624262   40267 provision.go:87] duration metric: took 238.314605ms to configureAuth
	I0812 11:10:53.624293   40267 buildroot.go:189] setting minikube options for container-runtime
	I0812 11:10:53.624531   40267 config.go:182] Loaded profile config "multinode-053297": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 11:10:53.624642   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHHostname
	I0812 11:10:53.627272   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:10:53.627668   40267 main.go:141] libmachine: (multinode-053297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:99:5e", ip: ""} in network mk-multinode-053297: {Iface:virbr1 ExpiryTime:2024-08-12 12:05:19 +0000 UTC Type:0 Mac:52:54:00:b2:99:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-053297 Clientid:01:52:54:00:b2:99:5e}
	I0812 11:10:53.627698   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined IP address 192.168.39.95 and MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:10:53.627924   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHPort
	I0812 11:10:53.628131   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHKeyPath
	I0812 11:10:53.628285   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHKeyPath
	I0812 11:10:53.628448   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHUsername
	I0812 11:10:53.628669   40267 main.go:141] libmachine: Using SSH client type: native
	I0812 11:10:53.628848   40267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0812 11:10:53.628887   40267 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 11:12:24.514340   40267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 11:12:24.514376   40267 machine.go:97] duration metric: took 1m31.497175287s to provisionDockerMachine
	I0812 11:12:24.514395   40267 start.go:293] postStartSetup for "multinode-053297" (driver="kvm2")
	I0812 11:12:24.514410   40267 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 11:12:24.514432   40267 main.go:141] libmachine: (multinode-053297) Calling .DriverName
	I0812 11:12:24.514813   40267 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 11:12:24.514839   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHHostname
	I0812 11:12:24.518090   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:12:24.518486   40267 main.go:141] libmachine: (multinode-053297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:99:5e", ip: ""} in network mk-multinode-053297: {Iface:virbr1 ExpiryTime:2024-08-12 12:05:19 +0000 UTC Type:0 Mac:52:54:00:b2:99:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-053297 Clientid:01:52:54:00:b2:99:5e}
	I0812 11:12:24.518507   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined IP address 192.168.39.95 and MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:12:24.518690   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHPort
	I0812 11:12:24.518907   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHKeyPath
	I0812 11:12:24.519111   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHUsername
	I0812 11:12:24.519273   40267 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/multinode-053297/id_rsa Username:docker}
	I0812 11:12:24.608663   40267 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 11:12:24.612702   40267 command_runner.go:130] > NAME=Buildroot
	I0812 11:12:24.612721   40267 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0812 11:12:24.612726   40267 command_runner.go:130] > ID=buildroot
	I0812 11:12:24.612731   40267 command_runner.go:130] > VERSION_ID=2023.02.9
	I0812 11:12:24.612742   40267 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0812 11:12:24.612770   40267 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 11:12:24.612785   40267 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/addons for local assets ...
	I0812 11:12:24.612856   40267 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/files for local assets ...
	I0812 11:12:24.612965   40267 filesync.go:149] local asset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> 109272.pem in /etc/ssl/certs
	I0812 11:12:24.612974   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> /etc/ssl/certs/109272.pem
	I0812 11:12:24.613072   40267 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 11:12:24.622611   40267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /etc/ssl/certs/109272.pem (1708 bytes)
	I0812 11:12:24.649506   40267 start.go:296] duration metric: took 135.095455ms for postStartSetup
	I0812 11:12:24.649555   40267 fix.go:56] duration metric: took 1m31.654043513s for fixHost
	I0812 11:12:24.649575   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHHostname
	I0812 11:12:24.652194   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:12:24.652554   40267 main.go:141] libmachine: (multinode-053297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:99:5e", ip: ""} in network mk-multinode-053297: {Iface:virbr1 ExpiryTime:2024-08-12 12:05:19 +0000 UTC Type:0 Mac:52:54:00:b2:99:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-053297 Clientid:01:52:54:00:b2:99:5e}
	I0812 11:12:24.652586   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined IP address 192.168.39.95 and MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:12:24.652681   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHPort
	I0812 11:12:24.652923   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHKeyPath
	I0812 11:12:24.653079   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHKeyPath
	I0812 11:12:24.653232   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHUsername
	I0812 11:12:24.653413   40267 main.go:141] libmachine: Using SSH client type: native
	I0812 11:12:24.653604   40267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0812 11:12:24.653615   40267 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0812 11:12:24.777267   40267 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723461144.752439142
	
	I0812 11:12:24.777296   40267 fix.go:216] guest clock: 1723461144.752439142
	I0812 11:12:24.777307   40267 fix.go:229] Guest: 2024-08-12 11:12:24.752439142 +0000 UTC Remote: 2024-08-12 11:12:24.649559793 +0000 UTC m=+91.786675546 (delta=102.879349ms)
	I0812 11:12:24.777341   40267 fix.go:200] guest clock delta is within tolerance: 102.879349ms
	I0812 11:12:24.777352   40267 start.go:83] releasing machines lock for "multinode-053297", held for 1m31.781851105s
	I0812 11:12:24.777391   40267 main.go:141] libmachine: (multinode-053297) Calling .DriverName
	I0812 11:12:24.777678   40267 main.go:141] libmachine: (multinode-053297) Calling .GetIP
	I0812 11:12:24.780377   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:12:24.780756   40267 main.go:141] libmachine: (multinode-053297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:99:5e", ip: ""} in network mk-multinode-053297: {Iface:virbr1 ExpiryTime:2024-08-12 12:05:19 +0000 UTC Type:0 Mac:52:54:00:b2:99:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-053297 Clientid:01:52:54:00:b2:99:5e}
	I0812 11:12:24.780811   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined IP address 192.168.39.95 and MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:12:24.780906   40267 main.go:141] libmachine: (multinode-053297) Calling .DriverName
	I0812 11:12:24.781370   40267 main.go:141] libmachine: (multinode-053297) Calling .DriverName
	I0812 11:12:24.781634   40267 main.go:141] libmachine: (multinode-053297) Calling .DriverName
	I0812 11:12:24.781764   40267 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 11:12:24.781801   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHHostname
	I0812 11:12:24.781875   40267 ssh_runner.go:195] Run: cat /version.json
	I0812 11:12:24.781899   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHHostname
	I0812 11:12:24.784501   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:12:24.784906   40267 main.go:141] libmachine: (multinode-053297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:99:5e", ip: ""} in network mk-multinode-053297: {Iface:virbr1 ExpiryTime:2024-08-12 12:05:19 +0000 UTC Type:0 Mac:52:54:00:b2:99:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-053297 Clientid:01:52:54:00:b2:99:5e}
	I0812 11:12:24.784946   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:12:24.784971   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined IP address 192.168.39.95 and MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:12:24.785120   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHPort
	I0812 11:12:24.785310   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHKeyPath
	I0812 11:12:24.785486   40267 main.go:141] libmachine: (multinode-053297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:99:5e", ip: ""} in network mk-multinode-053297: {Iface:virbr1 ExpiryTime:2024-08-12 12:05:19 +0000 UTC Type:0 Mac:52:54:00:b2:99:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-053297 Clientid:01:52:54:00:b2:99:5e}
	I0812 11:12:24.785495   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHUsername
	I0812 11:12:24.785511   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined IP address 192.168.39.95 and MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:12:24.785692   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHPort
	I0812 11:12:24.785700   40267 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/multinode-053297/id_rsa Username:docker}
	I0812 11:12:24.785859   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHKeyPath
	I0812 11:12:24.786010   40267 main.go:141] libmachine: (multinode-053297) Calling .GetSSHUsername
	I0812 11:12:24.786150   40267 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/multinode-053297/id_rsa Username:docker}
	I0812 11:12:24.905838   40267 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0812 11:12:24.906575   40267 command_runner.go:130] > {"iso_version": "v1.33.1-1722420371-19355", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "7d72c3be84f92807e8ddb66796778c6727075dd6"}
	I0812 11:12:24.906760   40267 ssh_runner.go:195] Run: systemctl --version
	I0812 11:12:24.913393   40267 command_runner.go:130] > systemd 252 (252)
	I0812 11:12:24.913450   40267 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0812 11:12:24.913516   40267 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 11:12:25.073010   40267 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0812 11:12:25.079558   40267 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0812 11:12:25.079922   40267 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 11:12:25.079998   40267 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 11:12:25.089520   40267 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0812 11:12:25.089560   40267 start.go:495] detecting cgroup driver to use...
	I0812 11:12:25.089633   40267 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 11:12:25.105924   40267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 11:12:25.120369   40267 docker.go:217] disabling cri-docker service (if available) ...
	I0812 11:12:25.120422   40267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 11:12:25.134261   40267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 11:12:25.148211   40267 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 11:12:25.295586   40267 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 11:12:25.438381   40267 docker.go:233] disabling docker service ...
	I0812 11:12:25.438447   40267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 11:12:25.456167   40267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 11:12:25.470969   40267 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 11:12:25.616134   40267 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 11:12:25.773426   40267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 11:12:25.786869   40267 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 11:12:25.806169   40267 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0812 11:12:25.806485   40267 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0812 11:12:25.806547   40267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:12:25.817087   40267 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 11:12:25.817178   40267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:12:25.827781   40267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:12:25.837924   40267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:12:25.848766   40267 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 11:12:25.859737   40267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:12:25.870508   40267 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:12:25.881028   40267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:12:25.891120   40267 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 11:12:25.900288   40267 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0812 11:12:25.900497   40267 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 11:12:25.909652   40267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:12:26.052003   40267 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0812 11:12:34.161606   40267 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.109562649s)
	I0812 11:12:34.161642   40267 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 11:12:34.161702   40267 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 11:12:34.166323   40267 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0812 11:12:34.166354   40267 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0812 11:12:34.166374   40267 command_runner.go:130] > Device: 0,22	Inode: 1351        Links: 1
	I0812 11:12:34.166381   40267 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0812 11:12:34.166386   40267 command_runner.go:130] > Access: 2024-08-12 11:12:34.019779954 +0000
	I0812 11:12:34.166403   40267 command_runner.go:130] > Modify: 2024-08-12 11:12:34.019779954 +0000
	I0812 11:12:34.166413   40267 command_runner.go:130] > Change: 2024-08-12 11:12:34.019779954 +0000
	I0812 11:12:34.166418   40267 command_runner.go:130] >  Birth: -
	I0812 11:12:34.166551   40267 start.go:563] Will wait 60s for crictl version
	I0812 11:12:34.166631   40267 ssh_runner.go:195] Run: which crictl
	I0812 11:12:34.170610   40267 command_runner.go:130] > /usr/bin/crictl
	I0812 11:12:34.170685   40267 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 11:12:34.205615   40267 command_runner.go:130] > Version:  0.1.0
	I0812 11:12:34.205650   40267 command_runner.go:130] > RuntimeName:  cri-o
	I0812 11:12:34.205658   40267 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0812 11:12:34.205665   40267 command_runner.go:130] > RuntimeApiVersion:  v1
	I0812 11:12:34.205689   40267 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0812 11:12:34.205771   40267 ssh_runner.go:195] Run: crio --version
	I0812 11:12:34.234794   40267 command_runner.go:130] > crio version 1.29.1
	I0812 11:12:34.234823   40267 command_runner.go:130] > Version:        1.29.1
	I0812 11:12:34.234830   40267 command_runner.go:130] > GitCommit:      unknown
	I0812 11:12:34.234835   40267 command_runner.go:130] > GitCommitDate:  unknown
	I0812 11:12:34.234856   40267 command_runner.go:130] > GitTreeState:   clean
	I0812 11:12:34.234864   40267 command_runner.go:130] > BuildDate:      2024-07-31T15:55:08Z
	I0812 11:12:34.234869   40267 command_runner.go:130] > GoVersion:      go1.21.6
	I0812 11:12:34.234875   40267 command_runner.go:130] > Compiler:       gc
	I0812 11:12:34.234881   40267 command_runner.go:130] > Platform:       linux/amd64
	I0812 11:12:34.234887   40267 command_runner.go:130] > Linkmode:       dynamic
	I0812 11:12:34.234893   40267 command_runner.go:130] > BuildTags:      
	I0812 11:12:34.234900   40267 command_runner.go:130] >   containers_image_ostree_stub
	I0812 11:12:34.234907   40267 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0812 11:12:34.234917   40267 command_runner.go:130] >   btrfs_noversion
	I0812 11:12:34.234924   40267 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0812 11:12:34.234931   40267 command_runner.go:130] >   libdm_no_deferred_remove
	I0812 11:12:34.234937   40267 command_runner.go:130] >   seccomp
	I0812 11:12:34.234945   40267 command_runner.go:130] > LDFlags:          unknown
	I0812 11:12:34.234952   40267 command_runner.go:130] > SeccompEnabled:   true
	I0812 11:12:34.234959   40267 command_runner.go:130] > AppArmorEnabled:  false
	I0812 11:12:34.235041   40267 ssh_runner.go:195] Run: crio --version
	I0812 11:12:34.262741   40267 command_runner.go:130] > crio version 1.29.1
	I0812 11:12:34.262767   40267 command_runner.go:130] > Version:        1.29.1
	I0812 11:12:34.262775   40267 command_runner.go:130] > GitCommit:      unknown
	I0812 11:12:34.262781   40267 command_runner.go:130] > GitCommitDate:  unknown
	I0812 11:12:34.262787   40267 command_runner.go:130] > GitTreeState:   clean
	I0812 11:12:34.262794   40267 command_runner.go:130] > BuildDate:      2024-07-31T15:55:08Z
	I0812 11:12:34.262799   40267 command_runner.go:130] > GoVersion:      go1.21.6
	I0812 11:12:34.262805   40267 command_runner.go:130] > Compiler:       gc
	I0812 11:12:34.262812   40267 command_runner.go:130] > Platform:       linux/amd64
	I0812 11:12:34.262818   40267 command_runner.go:130] > Linkmode:       dynamic
	I0812 11:12:34.262824   40267 command_runner.go:130] > BuildTags:      
	I0812 11:12:34.262831   40267 command_runner.go:130] >   containers_image_ostree_stub
	I0812 11:12:34.262839   40267 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0812 11:12:34.262853   40267 command_runner.go:130] >   btrfs_noversion
	I0812 11:12:34.262860   40267 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0812 11:12:34.262871   40267 command_runner.go:130] >   libdm_no_deferred_remove
	I0812 11:12:34.262877   40267 command_runner.go:130] >   seccomp
	I0812 11:12:34.262885   40267 command_runner.go:130] > LDFlags:          unknown
	I0812 11:12:34.262894   40267 command_runner.go:130] > SeccompEnabled:   true
	I0812 11:12:34.262901   40267 command_runner.go:130] > AppArmorEnabled:  false
	I0812 11:12:34.266184   40267 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0812 11:12:34.267395   40267 main.go:141] libmachine: (multinode-053297) Calling .GetIP
	I0812 11:12:34.270166   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:12:34.270496   40267 main.go:141] libmachine: (multinode-053297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:99:5e", ip: ""} in network mk-multinode-053297: {Iface:virbr1 ExpiryTime:2024-08-12 12:05:19 +0000 UTC Type:0 Mac:52:54:00:b2:99:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-053297 Clientid:01:52:54:00:b2:99:5e}
	I0812 11:12:34.270527   40267 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined IP address 192.168.39.95 and MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:12:34.270733   40267 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0812 11:12:34.274852   40267 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0812 11:12:34.274974   40267 kubeadm.go:883] updating cluster {Name:multinode-053297 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-053297 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.9 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.182 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0812 11:12:34.275127   40267 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 11:12:34.275180   40267 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 11:12:34.318681   40267 command_runner.go:130] > {
	I0812 11:12:34.318704   40267 command_runner.go:130] >   "images": [
	I0812 11:12:34.318708   40267 command_runner.go:130] >     {
	I0812 11:12:34.318715   40267 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0812 11:12:34.318720   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.318725   40267 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0812 11:12:34.318729   40267 command_runner.go:130] >       ],
	I0812 11:12:34.318733   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.318743   40267 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0812 11:12:34.318750   40267 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0812 11:12:34.318753   40267 command_runner.go:130] >       ],
	I0812 11:12:34.318766   40267 command_runner.go:130] >       "size": "87165492",
	I0812 11:12:34.318771   40267 command_runner.go:130] >       "uid": null,
	I0812 11:12:34.318774   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.318782   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.318789   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.318793   40267 command_runner.go:130] >     },
	I0812 11:12:34.318796   40267 command_runner.go:130] >     {
	I0812 11:12:34.318801   40267 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0812 11:12:34.318805   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.318810   40267 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0812 11:12:34.318814   40267 command_runner.go:130] >       ],
	I0812 11:12:34.318818   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.318825   40267 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0812 11:12:34.318833   40267 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0812 11:12:34.318836   40267 command_runner.go:130] >       ],
	I0812 11:12:34.318840   40267 command_runner.go:130] >       "size": "87165492",
	I0812 11:12:34.318844   40267 command_runner.go:130] >       "uid": null,
	I0812 11:12:34.318852   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.318858   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.318862   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.318865   40267 command_runner.go:130] >     },
	I0812 11:12:34.318868   40267 command_runner.go:130] >     {
	I0812 11:12:34.318873   40267 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0812 11:12:34.318877   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.318882   40267 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0812 11:12:34.318887   40267 command_runner.go:130] >       ],
	I0812 11:12:34.318891   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.318898   40267 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0812 11:12:34.318905   40267 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0812 11:12:34.318909   40267 command_runner.go:130] >       ],
	I0812 11:12:34.318913   40267 command_runner.go:130] >       "size": "1363676",
	I0812 11:12:34.318919   40267 command_runner.go:130] >       "uid": null,
	I0812 11:12:34.318923   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.318926   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.318931   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.318937   40267 command_runner.go:130] >     },
	I0812 11:12:34.318945   40267 command_runner.go:130] >     {
	I0812 11:12:34.318951   40267 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0812 11:12:34.318957   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.318962   40267 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0812 11:12:34.318968   40267 command_runner.go:130] >       ],
	I0812 11:12:34.318972   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.318979   40267 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0812 11:12:34.318995   40267 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0812 11:12:34.319000   40267 command_runner.go:130] >       ],
	I0812 11:12:34.319005   40267 command_runner.go:130] >       "size": "31470524",
	I0812 11:12:34.319009   40267 command_runner.go:130] >       "uid": null,
	I0812 11:12:34.319012   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.319016   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.319020   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.319023   40267 command_runner.go:130] >     },
	I0812 11:12:34.319026   40267 command_runner.go:130] >     {
	I0812 11:12:34.319032   40267 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0812 11:12:34.319038   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.319043   40267 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0812 11:12:34.319048   40267 command_runner.go:130] >       ],
	I0812 11:12:34.319052   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.319059   40267 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0812 11:12:34.319068   40267 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0812 11:12:34.319071   40267 command_runner.go:130] >       ],
	I0812 11:12:34.319075   40267 command_runner.go:130] >       "size": "61245718",
	I0812 11:12:34.319079   40267 command_runner.go:130] >       "uid": null,
	I0812 11:12:34.319083   40267 command_runner.go:130] >       "username": "nonroot",
	I0812 11:12:34.319087   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.319090   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.319094   40267 command_runner.go:130] >     },
	I0812 11:12:34.319097   40267 command_runner.go:130] >     {
	I0812 11:12:34.319105   40267 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0812 11:12:34.319109   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.319114   40267 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0812 11:12:34.319120   40267 command_runner.go:130] >       ],
	I0812 11:12:34.319123   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.319134   40267 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0812 11:12:34.319143   40267 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0812 11:12:34.319147   40267 command_runner.go:130] >       ],
	I0812 11:12:34.319150   40267 command_runner.go:130] >       "size": "150779692",
	I0812 11:12:34.319154   40267 command_runner.go:130] >       "uid": {
	I0812 11:12:34.319158   40267 command_runner.go:130] >         "value": "0"
	I0812 11:12:34.319161   40267 command_runner.go:130] >       },
	I0812 11:12:34.319165   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.319169   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.319172   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.319176   40267 command_runner.go:130] >     },
	I0812 11:12:34.319179   40267 command_runner.go:130] >     {
	I0812 11:12:34.319185   40267 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0812 11:12:34.319189   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.319194   40267 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0812 11:12:34.319199   40267 command_runner.go:130] >       ],
	I0812 11:12:34.319203   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.319210   40267 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0812 11:12:34.319219   40267 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0812 11:12:34.319222   40267 command_runner.go:130] >       ],
	I0812 11:12:34.319226   40267 command_runner.go:130] >       "size": "117609954",
	I0812 11:12:34.319230   40267 command_runner.go:130] >       "uid": {
	I0812 11:12:34.319234   40267 command_runner.go:130] >         "value": "0"
	I0812 11:12:34.319237   40267 command_runner.go:130] >       },
	I0812 11:12:34.319241   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.319244   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.319250   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.319253   40267 command_runner.go:130] >     },
	I0812 11:12:34.319256   40267 command_runner.go:130] >     {
	I0812 11:12:34.319262   40267 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0812 11:12:34.319267   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.319272   40267 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0812 11:12:34.319275   40267 command_runner.go:130] >       ],
	I0812 11:12:34.319279   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.319299   40267 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0812 11:12:34.319309   40267 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0812 11:12:34.319320   40267 command_runner.go:130] >       ],
	I0812 11:12:34.319347   40267 command_runner.go:130] >       "size": "112198984",
	I0812 11:12:34.319354   40267 command_runner.go:130] >       "uid": {
	I0812 11:12:34.319357   40267 command_runner.go:130] >         "value": "0"
	I0812 11:12:34.319361   40267 command_runner.go:130] >       },
	I0812 11:12:34.319364   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.319368   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.319371   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.319374   40267 command_runner.go:130] >     },
	I0812 11:12:34.319377   40267 command_runner.go:130] >     {
	I0812 11:12:34.319382   40267 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0812 11:12:34.319386   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.319391   40267 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0812 11:12:34.319394   40267 command_runner.go:130] >       ],
	I0812 11:12:34.319397   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.319404   40267 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0812 11:12:34.319410   40267 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0812 11:12:34.319413   40267 command_runner.go:130] >       ],
	I0812 11:12:34.319420   40267 command_runner.go:130] >       "size": "85953945",
	I0812 11:12:34.319424   40267 command_runner.go:130] >       "uid": null,
	I0812 11:12:34.319428   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.319431   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.319435   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.319438   40267 command_runner.go:130] >     },
	I0812 11:12:34.319442   40267 command_runner.go:130] >     {
	I0812 11:12:34.319448   40267 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0812 11:12:34.319454   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.319458   40267 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0812 11:12:34.319464   40267 command_runner.go:130] >       ],
	I0812 11:12:34.319468   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.319475   40267 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0812 11:12:34.319484   40267 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0812 11:12:34.319487   40267 command_runner.go:130] >       ],
	I0812 11:12:34.319492   40267 command_runner.go:130] >       "size": "63051080",
	I0812 11:12:34.319498   40267 command_runner.go:130] >       "uid": {
	I0812 11:12:34.319509   40267 command_runner.go:130] >         "value": "0"
	I0812 11:12:34.319519   40267 command_runner.go:130] >       },
	I0812 11:12:34.319523   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.319527   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.319531   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.319534   40267 command_runner.go:130] >     },
	I0812 11:12:34.319537   40267 command_runner.go:130] >     {
	I0812 11:12:34.319543   40267 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0812 11:12:34.319549   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.319554   40267 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0812 11:12:34.319559   40267 command_runner.go:130] >       ],
	I0812 11:12:34.319562   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.319580   40267 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0812 11:12:34.319586   40267 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0812 11:12:34.319591   40267 command_runner.go:130] >       ],
	I0812 11:12:34.319595   40267 command_runner.go:130] >       "size": "750414",
	I0812 11:12:34.319599   40267 command_runner.go:130] >       "uid": {
	I0812 11:12:34.319603   40267 command_runner.go:130] >         "value": "65535"
	I0812 11:12:34.319606   40267 command_runner.go:130] >       },
	I0812 11:12:34.319610   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.319614   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.319618   40267 command_runner.go:130] >       "pinned": true
	I0812 11:12:34.319620   40267 command_runner.go:130] >     }
	I0812 11:12:34.319623   40267 command_runner.go:130] >   ]
	I0812 11:12:34.319626   40267 command_runner.go:130] > }
	I0812 11:12:34.320595   40267 crio.go:514] all images are preloaded for cri-o runtime.
	I0812 11:12:34.320618   40267 crio.go:433] Images already preloaded, skipping extraction
	I0812 11:12:34.320686   40267 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 11:12:34.353463   40267 command_runner.go:130] > {
	I0812 11:12:34.353483   40267 command_runner.go:130] >   "images": [
	I0812 11:12:34.353486   40267 command_runner.go:130] >     {
	I0812 11:12:34.353495   40267 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0812 11:12:34.353503   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.353513   40267 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0812 11:12:34.353518   40267 command_runner.go:130] >       ],
	I0812 11:12:34.353524   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.353535   40267 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0812 11:12:34.353545   40267 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0812 11:12:34.353555   40267 command_runner.go:130] >       ],
	I0812 11:12:34.353562   40267 command_runner.go:130] >       "size": "87165492",
	I0812 11:12:34.353579   40267 command_runner.go:130] >       "uid": null,
	I0812 11:12:34.353587   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.353594   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.353602   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.353608   40267 command_runner.go:130] >     },
	I0812 11:12:34.353617   40267 command_runner.go:130] >     {
	I0812 11:12:34.353627   40267 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0812 11:12:34.353636   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.353645   40267 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0812 11:12:34.353652   40267 command_runner.go:130] >       ],
	I0812 11:12:34.353657   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.353669   40267 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0812 11:12:34.353684   40267 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0812 11:12:34.353692   40267 command_runner.go:130] >       ],
	I0812 11:12:34.353698   40267 command_runner.go:130] >       "size": "87165492",
	I0812 11:12:34.353706   40267 command_runner.go:130] >       "uid": null,
	I0812 11:12:34.353718   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.353728   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.353737   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.353746   40267 command_runner.go:130] >     },
	I0812 11:12:34.353752   40267 command_runner.go:130] >     {
	I0812 11:12:34.353764   40267 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0812 11:12:34.353773   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.353781   40267 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0812 11:12:34.353789   40267 command_runner.go:130] >       ],
	I0812 11:12:34.353796   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.353809   40267 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0812 11:12:34.353820   40267 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0812 11:12:34.353827   40267 command_runner.go:130] >       ],
	I0812 11:12:34.353831   40267 command_runner.go:130] >       "size": "1363676",
	I0812 11:12:34.353837   40267 command_runner.go:130] >       "uid": null,
	I0812 11:12:34.353841   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.353845   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.353852   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.353855   40267 command_runner.go:130] >     },
	I0812 11:12:34.353860   40267 command_runner.go:130] >     {
	I0812 11:12:34.353871   40267 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0812 11:12:34.353878   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.353883   40267 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0812 11:12:34.353887   40267 command_runner.go:130] >       ],
	I0812 11:12:34.353891   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.353899   40267 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0812 11:12:34.353915   40267 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0812 11:12:34.353921   40267 command_runner.go:130] >       ],
	I0812 11:12:34.353925   40267 command_runner.go:130] >       "size": "31470524",
	I0812 11:12:34.353929   40267 command_runner.go:130] >       "uid": null,
	I0812 11:12:34.353933   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.353937   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.353941   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.353944   40267 command_runner.go:130] >     },
	I0812 11:12:34.353948   40267 command_runner.go:130] >     {
	I0812 11:12:34.353953   40267 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0812 11:12:34.353959   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.353964   40267 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0812 11:12:34.353968   40267 command_runner.go:130] >       ],
	I0812 11:12:34.353971   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.353978   40267 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0812 11:12:34.353987   40267 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0812 11:12:34.353996   40267 command_runner.go:130] >       ],
	I0812 11:12:34.354002   40267 command_runner.go:130] >       "size": "61245718",
	I0812 11:12:34.354006   40267 command_runner.go:130] >       "uid": null,
	I0812 11:12:34.354010   40267 command_runner.go:130] >       "username": "nonroot",
	I0812 11:12:34.354014   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.354018   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.354021   40267 command_runner.go:130] >     },
	I0812 11:12:34.354024   40267 command_runner.go:130] >     {
	I0812 11:12:34.354032   40267 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0812 11:12:34.354036   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.354041   40267 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0812 11:12:34.354046   40267 command_runner.go:130] >       ],
	I0812 11:12:34.354050   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.354057   40267 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0812 11:12:34.354070   40267 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0812 11:12:34.354077   40267 command_runner.go:130] >       ],
	I0812 11:12:34.354081   40267 command_runner.go:130] >       "size": "150779692",
	I0812 11:12:34.354086   40267 command_runner.go:130] >       "uid": {
	I0812 11:12:34.354090   40267 command_runner.go:130] >         "value": "0"
	I0812 11:12:34.354094   40267 command_runner.go:130] >       },
	I0812 11:12:34.354101   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.354107   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.354111   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.354115   40267 command_runner.go:130] >     },
	I0812 11:12:34.354118   40267 command_runner.go:130] >     {
	I0812 11:12:34.354124   40267 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0812 11:12:34.354129   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.354134   40267 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0812 11:12:34.354139   40267 command_runner.go:130] >       ],
	I0812 11:12:34.354143   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.354152   40267 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0812 11:12:34.354162   40267 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0812 11:12:34.354165   40267 command_runner.go:130] >       ],
	I0812 11:12:34.354169   40267 command_runner.go:130] >       "size": "117609954",
	I0812 11:12:34.354175   40267 command_runner.go:130] >       "uid": {
	I0812 11:12:34.354179   40267 command_runner.go:130] >         "value": "0"
	I0812 11:12:34.354182   40267 command_runner.go:130] >       },
	I0812 11:12:34.354191   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.354197   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.354201   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.354205   40267 command_runner.go:130] >     },
	I0812 11:12:34.354208   40267 command_runner.go:130] >     {
	I0812 11:12:34.354214   40267 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0812 11:12:34.354218   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.354223   40267 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0812 11:12:34.354229   40267 command_runner.go:130] >       ],
	I0812 11:12:34.354233   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.354253   40267 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0812 11:12:34.354264   40267 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0812 11:12:34.354267   40267 command_runner.go:130] >       ],
	I0812 11:12:34.354275   40267 command_runner.go:130] >       "size": "112198984",
	I0812 11:12:34.354279   40267 command_runner.go:130] >       "uid": {
	I0812 11:12:34.354283   40267 command_runner.go:130] >         "value": "0"
	I0812 11:12:34.354287   40267 command_runner.go:130] >       },
	I0812 11:12:34.354290   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.354294   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.354298   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.354302   40267 command_runner.go:130] >     },
	I0812 11:12:34.354305   40267 command_runner.go:130] >     {
	I0812 11:12:34.354313   40267 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0812 11:12:34.354317   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.354323   40267 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0812 11:12:34.354327   40267 command_runner.go:130] >       ],
	I0812 11:12:34.354333   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.354339   40267 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0812 11:12:34.354360   40267 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0812 11:12:34.354365   40267 command_runner.go:130] >       ],
	I0812 11:12:34.354369   40267 command_runner.go:130] >       "size": "85953945",
	I0812 11:12:34.354373   40267 command_runner.go:130] >       "uid": null,
	I0812 11:12:34.354377   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.354383   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.354388   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.354393   40267 command_runner.go:130] >     },
	I0812 11:12:34.354396   40267 command_runner.go:130] >     {
	I0812 11:12:34.354402   40267 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0812 11:12:34.354409   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.354413   40267 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0812 11:12:34.354419   40267 command_runner.go:130] >       ],
	I0812 11:12:34.354422   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.354429   40267 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0812 11:12:34.354438   40267 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0812 11:12:34.354442   40267 command_runner.go:130] >       ],
	I0812 11:12:34.354445   40267 command_runner.go:130] >       "size": "63051080",
	I0812 11:12:34.354449   40267 command_runner.go:130] >       "uid": {
	I0812 11:12:34.354453   40267 command_runner.go:130] >         "value": "0"
	I0812 11:12:34.354456   40267 command_runner.go:130] >       },
	I0812 11:12:34.354464   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.354470   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.354474   40267 command_runner.go:130] >       "pinned": false
	I0812 11:12:34.354478   40267 command_runner.go:130] >     },
	I0812 11:12:34.354481   40267 command_runner.go:130] >     {
	I0812 11:12:34.354487   40267 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0812 11:12:34.354493   40267 command_runner.go:130] >       "repoTags": [
	I0812 11:12:34.354498   40267 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0812 11:12:34.354503   40267 command_runner.go:130] >       ],
	I0812 11:12:34.354507   40267 command_runner.go:130] >       "repoDigests": [
	I0812 11:12:34.354513   40267 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0812 11:12:34.354522   40267 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0812 11:12:34.354526   40267 command_runner.go:130] >       ],
	I0812 11:12:34.354530   40267 command_runner.go:130] >       "size": "750414",
	I0812 11:12:34.354534   40267 command_runner.go:130] >       "uid": {
	I0812 11:12:34.354538   40267 command_runner.go:130] >         "value": "65535"
	I0812 11:12:34.354541   40267 command_runner.go:130] >       },
	I0812 11:12:34.354545   40267 command_runner.go:130] >       "username": "",
	I0812 11:12:34.354549   40267 command_runner.go:130] >       "spec": null,
	I0812 11:12:34.354553   40267 command_runner.go:130] >       "pinned": true
	I0812 11:12:34.354558   40267 command_runner.go:130] >     }
	I0812 11:12:34.354561   40267 command_runner.go:130] >   ]
	I0812 11:12:34.354564   40267 command_runner.go:130] > }
	I0812 11:12:34.354703   40267 crio.go:514] all images are preloaded for cri-o runtime.
	I0812 11:12:34.354719   40267 cache_images.go:84] Images are preloaded, skipping loading
	I0812 11:12:34.354728   40267 kubeadm.go:934] updating node { 192.168.39.95 8443 v1.30.3 crio true true} ...
	I0812 11:12:34.354853   40267 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-053297 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-053297 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0812 11:12:34.354925   40267 ssh_runner.go:195] Run: crio config
	I0812 11:12:34.386635   40267 command_runner.go:130] ! time="2024-08-12 11:12:34.361348603Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0812 11:12:34.393060   40267 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0812 11:12:34.398422   40267 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0812 11:12:34.398446   40267 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0812 11:12:34.398455   40267 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0812 11:12:34.398459   40267 command_runner.go:130] > #
	I0812 11:12:34.398468   40267 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0812 11:12:34.398478   40267 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0812 11:12:34.398487   40267 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0812 11:12:34.398502   40267 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0812 11:12:34.398511   40267 command_runner.go:130] > # reload'.
	I0812 11:12:34.398522   40267 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0812 11:12:34.398534   40267 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0812 11:12:34.398557   40267 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0812 11:12:34.398571   40267 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0812 11:12:34.398579   40267 command_runner.go:130] > [crio]
	I0812 11:12:34.398590   40267 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0812 11:12:34.398600   40267 command_runner.go:130] > # containers images, in this directory.
	I0812 11:12:34.398607   40267 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0812 11:12:34.398625   40267 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0812 11:12:34.398636   40267 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0812 11:12:34.398650   40267 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0812 11:12:34.398660   40267 command_runner.go:130] > # imagestore = ""
	I0812 11:12:34.398669   40267 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0812 11:12:34.398684   40267 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0812 11:12:34.398694   40267 command_runner.go:130] > storage_driver = "overlay"
	I0812 11:12:34.398702   40267 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0812 11:12:34.398711   40267 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0812 11:12:34.398721   40267 command_runner.go:130] > storage_option = [
	I0812 11:12:34.398729   40267 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0812 11:12:34.398737   40267 command_runner.go:130] > ]
	I0812 11:12:34.398748   40267 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0812 11:12:34.398761   40267 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0812 11:12:34.398771   40267 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0812 11:12:34.398784   40267 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0812 11:12:34.398795   40267 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0812 11:12:34.398805   40267 command_runner.go:130] > # always happen on a node reboot
	I0812 11:12:34.398815   40267 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0812 11:12:34.398835   40267 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0812 11:12:34.398848   40267 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0812 11:12:34.398859   40267 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0812 11:12:34.398870   40267 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0812 11:12:34.398882   40267 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0812 11:12:34.398898   40267 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0812 11:12:34.398907   40267 command_runner.go:130] > # internal_wipe = true
	I0812 11:12:34.398921   40267 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0812 11:12:34.398933   40267 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0812 11:12:34.398943   40267 command_runner.go:130] > # internal_repair = false
	I0812 11:12:34.398954   40267 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0812 11:12:34.398971   40267 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0812 11:12:34.398983   40267 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0812 11:12:34.398996   40267 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0812 11:12:34.399009   40267 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0812 11:12:34.399017   40267 command_runner.go:130] > [crio.api]
	I0812 11:12:34.399026   40267 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0812 11:12:34.399036   40267 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0812 11:12:34.399045   40267 command_runner.go:130] > # IP address on which the stream server will listen.
	I0812 11:12:34.399056   40267 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0812 11:12:34.399069   40267 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0812 11:12:34.399084   40267 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0812 11:12:34.399094   40267 command_runner.go:130] > # stream_port = "0"
	I0812 11:12:34.399104   40267 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0812 11:12:34.399114   40267 command_runner.go:130] > # stream_enable_tls = false
	I0812 11:12:34.399127   40267 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0812 11:12:34.399137   40267 command_runner.go:130] > # stream_idle_timeout = ""
	I0812 11:12:34.399148   40267 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0812 11:12:34.399160   40267 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0812 11:12:34.399168   40267 command_runner.go:130] > # minutes.
	I0812 11:12:34.399177   40267 command_runner.go:130] > # stream_tls_cert = ""
	I0812 11:12:34.399190   40267 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0812 11:12:34.399202   40267 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0812 11:12:34.399212   40267 command_runner.go:130] > # stream_tls_key = ""
	I0812 11:12:34.399225   40267 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0812 11:12:34.399237   40267 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0812 11:12:34.399265   40267 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0812 11:12:34.399275   40267 command_runner.go:130] > # stream_tls_ca = ""
	I0812 11:12:34.399289   40267 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0812 11:12:34.399299   40267 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0812 11:12:34.399313   40267 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0812 11:12:34.399323   40267 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0812 11:12:34.399333   40267 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0812 11:12:34.399351   40267 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0812 11:12:34.399359   40267 command_runner.go:130] > [crio.runtime]
	I0812 11:12:34.399375   40267 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0812 11:12:34.399388   40267 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0812 11:12:34.399404   40267 command_runner.go:130] > # "nofile=1024:2048"
	I0812 11:12:34.399417   40267 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0812 11:12:34.399424   40267 command_runner.go:130] > # default_ulimits = [
	I0812 11:12:34.399433   40267 command_runner.go:130] > # ]
	I0812 11:12:34.399444   40267 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0812 11:12:34.399452   40267 command_runner.go:130] > # no_pivot = false
	I0812 11:12:34.399463   40267 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0812 11:12:34.399476   40267 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0812 11:12:34.399488   40267 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0812 11:12:34.399500   40267 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0812 11:12:34.399509   40267 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0812 11:12:34.399522   40267 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0812 11:12:34.399533   40267 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0812 11:12:34.399542   40267 command_runner.go:130] > # Cgroup setting for conmon
	I0812 11:12:34.399554   40267 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0812 11:12:34.399563   40267 command_runner.go:130] > conmon_cgroup = "pod"
	I0812 11:12:34.399576   40267 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0812 11:12:34.399595   40267 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0812 11:12:34.399609   40267 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0812 11:12:34.399618   40267 command_runner.go:130] > conmon_env = [
	I0812 11:12:34.399630   40267 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0812 11:12:34.399638   40267 command_runner.go:130] > ]
	I0812 11:12:34.399648   40267 command_runner.go:130] > # Additional environment variables to set for all the
	I0812 11:12:34.399659   40267 command_runner.go:130] > # containers. These are overridden if set in the
	I0812 11:12:34.399671   40267 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0812 11:12:34.399680   40267 command_runner.go:130] > # default_env = [
	I0812 11:12:34.399686   40267 command_runner.go:130] > # ]
	I0812 11:12:34.399697   40267 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0812 11:12:34.399712   40267 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0812 11:12:34.399722   40267 command_runner.go:130] > # selinux = false
	I0812 11:12:34.399734   40267 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0812 11:12:34.399747   40267 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0812 11:12:34.399758   40267 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0812 11:12:34.399766   40267 command_runner.go:130] > # seccomp_profile = ""
	I0812 11:12:34.399779   40267 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0812 11:12:34.399791   40267 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0812 11:12:34.399811   40267 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0812 11:12:34.399827   40267 command_runner.go:130] > # which might increase security.
	I0812 11:12:34.399839   40267 command_runner.go:130] > # This option is currently deprecated,
	I0812 11:12:34.399850   40267 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0812 11:12:34.399861   40267 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0812 11:12:34.399874   40267 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0812 11:12:34.399888   40267 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0812 11:12:34.399901   40267 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0812 11:12:34.399913   40267 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0812 11:12:34.399923   40267 command_runner.go:130] > # This option supports live configuration reload.
	I0812 11:12:34.399931   40267 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0812 11:12:34.399944   40267 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0812 11:12:34.399954   40267 command_runner.go:130] > # the cgroup blockio controller.
	I0812 11:12:34.399962   40267 command_runner.go:130] > # blockio_config_file = ""
	I0812 11:12:34.399976   40267 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0812 11:12:34.399986   40267 command_runner.go:130] > # blockio parameters.
	I0812 11:12:34.399995   40267 command_runner.go:130] > # blockio_reload = false
	I0812 11:12:34.400006   40267 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0812 11:12:34.400028   40267 command_runner.go:130] > # irqbalance daemon.
	I0812 11:12:34.400040   40267 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0812 11:12:34.400053   40267 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0812 11:12:34.400067   40267 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0812 11:12:34.400081   40267 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0812 11:12:34.400094   40267 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0812 11:12:34.400106   40267 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0812 11:12:34.400116   40267 command_runner.go:130] > # This option supports live configuration reload.
	I0812 11:12:34.400125   40267 command_runner.go:130] > # rdt_config_file = ""
	I0812 11:12:34.400136   40267 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0812 11:12:34.400146   40267 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0812 11:12:34.400188   40267 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0812 11:12:34.400201   40267 command_runner.go:130] > # separate_pull_cgroup = ""
	I0812 11:12:34.400212   40267 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0812 11:12:34.400225   40267 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0812 11:12:34.400235   40267 command_runner.go:130] > # will be added.
	I0812 11:12:34.400244   40267 command_runner.go:130] > # default_capabilities = [
	I0812 11:12:34.400251   40267 command_runner.go:130] > # 	"CHOWN",
	I0812 11:12:34.400268   40267 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0812 11:12:34.400278   40267 command_runner.go:130] > # 	"FSETID",
	I0812 11:12:34.400285   40267 command_runner.go:130] > # 	"FOWNER",
	I0812 11:12:34.400292   40267 command_runner.go:130] > # 	"SETGID",
	I0812 11:12:34.400301   40267 command_runner.go:130] > # 	"SETUID",
	I0812 11:12:34.400308   40267 command_runner.go:130] > # 	"SETPCAP",
	I0812 11:12:34.400318   40267 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0812 11:12:34.400326   40267 command_runner.go:130] > # 	"KILL",
	I0812 11:12:34.400334   40267 command_runner.go:130] > # ]
	I0812 11:12:34.400351   40267 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0812 11:12:34.400362   40267 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0812 11:12:34.400371   40267 command_runner.go:130] > # add_inheritable_capabilities = false
	I0812 11:12:34.400384   40267 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0812 11:12:34.400397   40267 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0812 11:12:34.400407   40267 command_runner.go:130] > default_sysctls = [
	I0812 11:12:34.400417   40267 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0812 11:12:34.400424   40267 command_runner.go:130] > ]
	I0812 11:12:34.400433   40267 command_runner.go:130] > # List of devices on the host that a
	I0812 11:12:34.400446   40267 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0812 11:12:34.400455   40267 command_runner.go:130] > # allowed_devices = [
	I0812 11:12:34.400462   40267 command_runner.go:130] > # 	"/dev/fuse",
	I0812 11:12:34.400470   40267 command_runner.go:130] > # ]
	I0812 11:12:34.400478   40267 command_runner.go:130] > # List of additional devices. specified as
	I0812 11:12:34.400493   40267 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0812 11:12:34.400505   40267 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0812 11:12:34.400517   40267 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0812 11:12:34.400526   40267 command_runner.go:130] > # additional_devices = [
	I0812 11:12:34.400532   40267 command_runner.go:130] > # ]
	I0812 11:12:34.400542   40267 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0812 11:12:34.400552   40267 command_runner.go:130] > # cdi_spec_dirs = [
	I0812 11:12:34.400560   40267 command_runner.go:130] > # 	"/etc/cdi",
	I0812 11:12:34.400568   40267 command_runner.go:130] > # 	"/var/run/cdi",
	I0812 11:12:34.400574   40267 command_runner.go:130] > # ]
	I0812 11:12:34.400587   40267 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0812 11:12:34.400600   40267 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0812 11:12:34.400608   40267 command_runner.go:130] > # Defaults to false.
	I0812 11:12:34.400626   40267 command_runner.go:130] > # device_ownership_from_security_context = false
	I0812 11:12:34.400640   40267 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0812 11:12:34.400652   40267 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0812 11:12:34.400662   40267 command_runner.go:130] > # hooks_dir = [
	I0812 11:12:34.400671   40267 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0812 11:12:34.400679   40267 command_runner.go:130] > # ]
	I0812 11:12:34.400689   40267 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0812 11:12:34.400702   40267 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0812 11:12:34.400711   40267 command_runner.go:130] > # its default mounts from the following two files:
	I0812 11:12:34.400719   40267 command_runner.go:130] > #
	I0812 11:12:34.400730   40267 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0812 11:12:34.400743   40267 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0812 11:12:34.400756   40267 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0812 11:12:34.400763   40267 command_runner.go:130] > #
	I0812 11:12:34.400781   40267 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0812 11:12:34.400794   40267 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0812 11:12:34.400805   40267 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0812 11:12:34.400816   40267 command_runner.go:130] > #      only add mounts it finds in this file.
	I0812 11:12:34.400821   40267 command_runner.go:130] > #
	I0812 11:12:34.400828   40267 command_runner.go:130] > # default_mounts_file = ""
	I0812 11:12:34.400838   40267 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0812 11:12:34.400852   40267 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0812 11:12:34.400862   40267 command_runner.go:130] > pids_limit = 1024
	I0812 11:12:34.400879   40267 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0812 11:12:34.400892   40267 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0812 11:12:34.400906   40267 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0812 11:12:34.400922   40267 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0812 11:12:34.400931   40267 command_runner.go:130] > # log_size_max = -1
	I0812 11:12:34.400943   40267 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0812 11:12:34.400952   40267 command_runner.go:130] > # log_to_journald = false
	I0812 11:12:34.400963   40267 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0812 11:12:34.400974   40267 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0812 11:12:34.400984   40267 command_runner.go:130] > # Path to directory for container attach sockets.
	I0812 11:12:34.400994   40267 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0812 11:12:34.401007   40267 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0812 11:12:34.401016   40267 command_runner.go:130] > # bind_mount_prefix = ""
	I0812 11:12:34.401035   40267 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0812 11:12:34.401044   40267 command_runner.go:130] > # read_only = false
	I0812 11:12:34.401055   40267 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0812 11:12:34.401068   40267 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0812 11:12:34.401078   40267 command_runner.go:130] > # live configuration reload.
	I0812 11:12:34.401088   40267 command_runner.go:130] > # log_level = "info"
	I0812 11:12:34.401098   40267 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0812 11:12:34.401109   40267 command_runner.go:130] > # This option supports live configuration reload.
	I0812 11:12:34.401116   40267 command_runner.go:130] > # log_filter = ""
	I0812 11:12:34.401129   40267 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0812 11:12:34.401143   40267 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0812 11:12:34.401152   40267 command_runner.go:130] > # separated by comma.
	I0812 11:12:34.401165   40267 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0812 11:12:34.401175   40267 command_runner.go:130] > # uid_mappings = ""
	I0812 11:12:34.401188   40267 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0812 11:12:34.401201   40267 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0812 11:12:34.401208   40267 command_runner.go:130] > # separated by comma.
	I0812 11:12:34.401221   40267 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0812 11:12:34.401230   40267 command_runner.go:130] > # gid_mappings = ""
	I0812 11:12:34.401240   40267 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0812 11:12:34.401253   40267 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0812 11:12:34.401265   40267 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0812 11:12:34.401281   40267 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0812 11:12:34.401290   40267 command_runner.go:130] > # minimum_mappable_uid = -1
	I0812 11:12:34.401301   40267 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0812 11:12:34.401318   40267 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0812 11:12:34.401331   40267 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0812 11:12:34.401350   40267 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0812 11:12:34.401361   40267 command_runner.go:130] > # minimum_mappable_gid = -1
	I0812 11:12:34.401372   40267 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0812 11:12:34.401384   40267 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0812 11:12:34.401396   40267 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0812 11:12:34.401406   40267 command_runner.go:130] > # ctr_stop_timeout = 30
	I0812 11:12:34.401419   40267 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0812 11:12:34.401431   40267 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0812 11:12:34.401443   40267 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0812 11:12:34.401459   40267 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0812 11:12:34.401469   40267 command_runner.go:130] > drop_infra_ctr = false
	I0812 11:12:34.401480   40267 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0812 11:12:34.401492   40267 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0812 11:12:34.401506   40267 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0812 11:12:34.401516   40267 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0812 11:12:34.401529   40267 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0812 11:12:34.401541   40267 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0812 11:12:34.401551   40267 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0812 11:12:34.401563   40267 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0812 11:12:34.401572   40267 command_runner.go:130] > # shared_cpuset = ""
	I0812 11:12:34.401583   40267 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0812 11:12:34.401594   40267 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0812 11:12:34.401605   40267 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0812 11:12:34.401620   40267 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0812 11:12:34.401629   40267 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0812 11:12:34.401637   40267 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0812 11:12:34.401651   40267 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0812 11:12:34.401661   40267 command_runner.go:130] > # enable_criu_support = false
	I0812 11:12:34.401673   40267 command_runner.go:130] > # Enable/disable the generation of the container,
	I0812 11:12:34.401683   40267 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0812 11:12:34.401693   40267 command_runner.go:130] > # enable_pod_events = false
	I0812 11:12:34.401705   40267 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0812 11:12:34.401718   40267 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0812 11:12:34.401728   40267 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0812 11:12:34.401736   40267 command_runner.go:130] > # default_runtime = "runc"
	I0812 11:12:34.401747   40267 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0812 11:12:34.401762   40267 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0812 11:12:34.401779   40267 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0812 11:12:34.401791   40267 command_runner.go:130] > # creation as a file is not desired either.
	I0812 11:12:34.401808   40267 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0812 11:12:34.401819   40267 command_runner.go:130] > # the hostname is being managed dynamically.
	I0812 11:12:34.401827   40267 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0812 11:12:34.401835   40267 command_runner.go:130] > # ]
	I0812 11:12:34.401846   40267 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0812 11:12:34.401860   40267 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0812 11:12:34.401881   40267 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0812 11:12:34.401893   40267 command_runner.go:130] > # Each entry in the table should follow the format:
	I0812 11:12:34.401901   40267 command_runner.go:130] > #
	I0812 11:12:34.401908   40267 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0812 11:12:34.401918   40267 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0812 11:12:34.401975   40267 command_runner.go:130] > # runtime_type = "oci"
	I0812 11:12:34.401985   40267 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0812 11:12:34.401993   40267 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0812 11:12:34.401999   40267 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0812 11:12:34.402006   40267 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0812 11:12:34.402015   40267 command_runner.go:130] > # monitor_env = []
	I0812 11:12:34.402024   40267 command_runner.go:130] > # privileged_without_host_devices = false
	I0812 11:12:34.402034   40267 command_runner.go:130] > # allowed_annotations = []
	I0812 11:12:34.402045   40267 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0812 11:12:34.402054   40267 command_runner.go:130] > # Where:
	I0812 11:12:34.402064   40267 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0812 11:12:34.402076   40267 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0812 11:12:34.402088   40267 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0812 11:12:34.402100   40267 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0812 11:12:34.402108   40267 command_runner.go:130] > #   in $PATH.
	I0812 11:12:34.402119   40267 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0812 11:12:34.402129   40267 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0812 11:12:34.402140   40267 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0812 11:12:34.402149   40267 command_runner.go:130] > #   state.
	I0812 11:12:34.402159   40267 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0812 11:12:34.402172   40267 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0812 11:12:34.402183   40267 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0812 11:12:34.402195   40267 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0812 11:12:34.402207   40267 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0812 11:12:34.402221   40267 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0812 11:12:34.402232   40267 command_runner.go:130] > #   The currently recognized values are:
	I0812 11:12:34.402247   40267 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0812 11:12:34.402261   40267 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0812 11:12:34.402274   40267 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0812 11:12:34.402287   40267 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0812 11:12:34.402301   40267 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0812 11:12:34.402321   40267 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0812 11:12:34.402335   40267 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0812 11:12:34.402352   40267 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0812 11:12:34.402366   40267 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0812 11:12:34.402379   40267 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0812 11:12:34.402389   40267 command_runner.go:130] > #   deprecated option "conmon".
	I0812 11:12:34.402404   40267 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0812 11:12:34.402415   40267 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0812 11:12:34.402430   40267 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0812 11:12:34.402441   40267 command_runner.go:130] > #   should be moved to the container's cgroup
	I0812 11:12:34.402455   40267 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0812 11:12:34.402466   40267 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0812 11:12:34.402478   40267 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0812 11:12:34.402489   40267 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0812 11:12:34.402496   40267 command_runner.go:130] > #
	I0812 11:12:34.402504   40267 command_runner.go:130] > # Using the seccomp notifier feature:
	I0812 11:12:34.402511   40267 command_runner.go:130] > #
	I0812 11:12:34.402521   40267 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0812 11:12:34.402535   40267 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0812 11:12:34.402543   40267 command_runner.go:130] > #
	I0812 11:12:34.402554   40267 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0812 11:12:34.402567   40267 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0812 11:12:34.402575   40267 command_runner.go:130] > #
	I0812 11:12:34.402585   40267 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0812 11:12:34.402594   40267 command_runner.go:130] > # feature.
	I0812 11:12:34.402599   40267 command_runner.go:130] > #
	I0812 11:12:34.402609   40267 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0812 11:12:34.402623   40267 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0812 11:12:34.402635   40267 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0812 11:12:34.402649   40267 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0812 11:12:34.402661   40267 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0812 11:12:34.402669   40267 command_runner.go:130] > #
	I0812 11:12:34.402679   40267 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0812 11:12:34.402692   40267 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0812 11:12:34.402700   40267 command_runner.go:130] > #
	I0812 11:12:34.402710   40267 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0812 11:12:34.402728   40267 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0812 11:12:34.402744   40267 command_runner.go:130] > #
	I0812 11:12:34.402755   40267 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0812 11:12:34.402766   40267 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0812 11:12:34.402775   40267 command_runner.go:130] > # limitation.
	I0812 11:12:34.402785   40267 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0812 11:12:34.402796   40267 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0812 11:12:34.402803   40267 command_runner.go:130] > runtime_type = "oci"
	I0812 11:12:34.402811   40267 command_runner.go:130] > runtime_root = "/run/runc"
	I0812 11:12:34.402819   40267 command_runner.go:130] > runtime_config_path = ""
	I0812 11:12:34.402827   40267 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0812 11:12:34.402836   40267 command_runner.go:130] > monitor_cgroup = "pod"
	I0812 11:12:34.402843   40267 command_runner.go:130] > monitor_exec_cgroup = ""
	I0812 11:12:34.402853   40267 command_runner.go:130] > monitor_env = [
	I0812 11:12:34.402863   40267 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0812 11:12:34.402870   40267 command_runner.go:130] > ]
	I0812 11:12:34.402879   40267 command_runner.go:130] > privileged_without_host_devices = false
	I0812 11:12:34.402892   40267 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0812 11:12:34.402903   40267 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0812 11:12:34.402916   40267 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0812 11:12:34.402931   40267 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0812 11:12:34.402945   40267 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0812 11:12:34.402958   40267 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0812 11:12:34.402975   40267 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0812 11:12:34.402989   40267 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0812 11:12:34.402997   40267 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0812 11:12:34.403005   40267 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0812 11:12:34.403011   40267 command_runner.go:130] > # Example:
	I0812 11:12:34.403018   40267 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0812 11:12:34.403025   40267 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0812 11:12:34.403033   40267 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0812 11:12:34.403042   40267 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0812 11:12:34.403049   40267 command_runner.go:130] > # cpuset = 0
	I0812 11:12:34.403056   40267 command_runner.go:130] > # cpushares = "0-1"
	I0812 11:12:34.403061   40267 command_runner.go:130] > # Where:
	I0812 11:12:34.403068   40267 command_runner.go:130] > # The workload name is workload-type.
	I0812 11:12:34.403087   40267 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0812 11:12:34.403097   40267 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0812 11:12:34.403106   40267 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0812 11:12:34.403119   40267 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0812 11:12:34.403128   40267 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0812 11:12:34.403137   40267 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0812 11:12:34.403147   40267 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0812 11:12:34.403155   40267 command_runner.go:130] > # Default value is set to true
	I0812 11:12:34.403162   40267 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0812 11:12:34.403170   40267 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0812 11:12:34.403178   40267 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0812 11:12:34.403186   40267 command_runner.go:130] > # Default value is set to 'false'
	I0812 11:12:34.403193   40267 command_runner.go:130] > # disable_hostport_mapping = false
	I0812 11:12:34.403202   40267 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0812 11:12:34.403210   40267 command_runner.go:130] > #
	I0812 11:12:34.403220   40267 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0812 11:12:34.403233   40267 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0812 11:12:34.403246   40267 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0812 11:12:34.403263   40267 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0812 11:12:34.403279   40267 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0812 11:12:34.403287   40267 command_runner.go:130] > [crio.image]
	I0812 11:12:34.403297   40267 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0812 11:12:34.403307   40267 command_runner.go:130] > # default_transport = "docker://"
	I0812 11:12:34.403320   40267 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0812 11:12:34.403334   40267 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0812 11:12:34.403344   40267 command_runner.go:130] > # global_auth_file = ""
	I0812 11:12:34.403360   40267 command_runner.go:130] > # The image used to instantiate infra containers.
	I0812 11:12:34.403372   40267 command_runner.go:130] > # This option supports live configuration reload.
	I0812 11:12:34.403384   40267 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0812 11:12:34.403397   40267 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0812 11:12:34.403410   40267 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0812 11:12:34.403422   40267 command_runner.go:130] > # This option supports live configuration reload.
	I0812 11:12:34.403432   40267 command_runner.go:130] > # pause_image_auth_file = ""
	I0812 11:12:34.403445   40267 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0812 11:12:34.403458   40267 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0812 11:12:34.403468   40267 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0812 11:12:34.403487   40267 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0812 11:12:34.403498   40267 command_runner.go:130] > # pause_command = "/pause"
	I0812 11:12:34.403510   40267 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0812 11:12:34.403524   40267 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0812 11:12:34.403537   40267 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0812 11:12:34.403554   40267 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0812 11:12:34.403567   40267 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0812 11:12:34.403579   40267 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0812 11:12:34.403590   40267 command_runner.go:130] > # pinned_images = [
	I0812 11:12:34.403596   40267 command_runner.go:130] > # ]
	I0812 11:12:34.403608   40267 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0812 11:12:34.403621   40267 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0812 11:12:34.403635   40267 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0812 11:12:34.403648   40267 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0812 11:12:34.403669   40267 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0812 11:12:34.403677   40267 command_runner.go:130] > # signature_policy = ""
	I0812 11:12:34.403687   40267 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0812 11:12:34.403702   40267 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0812 11:12:34.403715   40267 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0812 11:12:34.403729   40267 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0812 11:12:34.403742   40267 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0812 11:12:34.403753   40267 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0812 11:12:34.403765   40267 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0812 11:12:34.403776   40267 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0812 11:12:34.403785   40267 command_runner.go:130] > # changing them here.
	I0812 11:12:34.403793   40267 command_runner.go:130] > # insecure_registries = [
	I0812 11:12:34.403801   40267 command_runner.go:130] > # ]
	I0812 11:12:34.403812   40267 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0812 11:12:34.403823   40267 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0812 11:12:34.403833   40267 command_runner.go:130] > # image_volumes = "mkdir"
	I0812 11:12:34.403842   40267 command_runner.go:130] > # Temporary directory to use for storing big files
	I0812 11:12:34.403852   40267 command_runner.go:130] > # big_files_temporary_dir = ""
	I0812 11:12:34.403863   40267 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I0812 11:12:34.403876   40267 command_runner.go:130] > # CNI plugins.
	I0812 11:12:34.403884   40267 command_runner.go:130] > [crio.network]
	I0812 11:12:34.403895   40267 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0812 11:12:34.403914   40267 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0812 11:12:34.403924   40267 command_runner.go:130] > # cni_default_network = ""
	I0812 11:12:34.403935   40267 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0812 11:12:34.403943   40267 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0812 11:12:34.403955   40267 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0812 11:12:34.403964   40267 command_runner.go:130] > # plugin_dirs = [
	I0812 11:12:34.403972   40267 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0812 11:12:34.403980   40267 command_runner.go:130] > # ]
	I0812 11:12:34.403990   40267 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0812 11:12:34.403999   40267 command_runner.go:130] > [crio.metrics]
	I0812 11:12:34.404007   40267 command_runner.go:130] > # Globally enable or disable metrics support.
	I0812 11:12:34.404017   40267 command_runner.go:130] > enable_metrics = true
	I0812 11:12:34.404025   40267 command_runner.go:130] > # Specify enabled metrics collectors.
	I0812 11:12:34.404033   40267 command_runner.go:130] > # Per default all metrics are enabled.
	I0812 11:12:34.404046   40267 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0812 11:12:34.404059   40267 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0812 11:12:34.404071   40267 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0812 11:12:34.404080   40267 command_runner.go:130] > # metrics_collectors = [
	I0812 11:12:34.404088   40267 command_runner.go:130] > # 	"operations",
	I0812 11:12:34.404100   40267 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0812 11:12:34.404107   40267 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0812 11:12:34.404114   40267 command_runner.go:130] > # 	"operations_errors",
	I0812 11:12:34.404124   40267 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0812 11:12:34.404133   40267 command_runner.go:130] > # 	"image_pulls_by_name",
	I0812 11:12:34.404142   40267 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0812 11:12:34.404151   40267 command_runner.go:130] > # 	"image_pulls_failures",
	I0812 11:12:34.404161   40267 command_runner.go:130] > # 	"image_pulls_successes",
	I0812 11:12:34.404170   40267 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0812 11:12:34.404179   40267 command_runner.go:130] > # 	"image_layer_reuse",
	I0812 11:12:34.404188   40267 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0812 11:12:34.404197   40267 command_runner.go:130] > # 	"containers_oom_total",
	I0812 11:12:34.404205   40267 command_runner.go:130] > # 	"containers_oom",
	I0812 11:12:34.404214   40267 command_runner.go:130] > # 	"processes_defunct",
	I0812 11:12:34.404222   40267 command_runner.go:130] > # 	"operations_total",
	I0812 11:12:34.404230   40267 command_runner.go:130] > # 	"operations_latency_seconds",
	I0812 11:12:34.404240   40267 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0812 11:12:34.404256   40267 command_runner.go:130] > # 	"operations_errors_total",
	I0812 11:12:34.404267   40267 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0812 11:12:34.404277   40267 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0812 11:12:34.404286   40267 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0812 11:12:34.404294   40267 command_runner.go:130] > # 	"image_pulls_success_total",
	I0812 11:12:34.404303   40267 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0812 11:12:34.404312   40267 command_runner.go:130] > # 	"containers_oom_count_total",
	I0812 11:12:34.404321   40267 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0812 11:12:34.404331   40267 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0812 11:12:34.404337   40267 command_runner.go:130] > # ]
	I0812 11:12:34.404351   40267 command_runner.go:130] > # The port on which the metrics server will listen.
	I0812 11:12:34.404361   40267 command_runner.go:130] > # metrics_port = 9090
	I0812 11:12:34.404371   40267 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0812 11:12:34.404381   40267 command_runner.go:130] > # metrics_socket = ""
	I0812 11:12:34.404390   40267 command_runner.go:130] > # The certificate for the secure metrics server.
	I0812 11:12:34.404402   40267 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0812 11:12:34.404413   40267 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0812 11:12:34.404424   40267 command_runner.go:130] > # certificate on any modification event.
	I0812 11:12:34.404434   40267 command_runner.go:130] > # metrics_cert = ""
	I0812 11:12:34.404446   40267 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0812 11:12:34.404457   40267 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0812 11:12:34.404465   40267 command_runner.go:130] > # metrics_key = ""
	I0812 11:12:34.404475   40267 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0812 11:12:34.404483   40267 command_runner.go:130] > [crio.tracing]
	I0812 11:12:34.404492   40267 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0812 11:12:34.404501   40267 command_runner.go:130] > # enable_tracing = false
	I0812 11:12:34.404511   40267 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0812 11:12:34.404521   40267 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0812 11:12:34.404536   40267 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0812 11:12:34.404547   40267 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0812 11:12:34.404555   40267 command_runner.go:130] > # CRI-O NRI configuration.
	I0812 11:12:34.404562   40267 command_runner.go:130] > [crio.nri]
	I0812 11:12:34.404569   40267 command_runner.go:130] > # Globally enable or disable NRI.
	I0812 11:12:34.404576   40267 command_runner.go:130] > # enable_nri = false
	I0812 11:12:34.404586   40267 command_runner.go:130] > # NRI socket to listen on.
	I0812 11:12:34.404595   40267 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0812 11:12:34.404609   40267 command_runner.go:130] > # NRI plugin directory to use.
	I0812 11:12:34.404621   40267 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0812 11:12:34.404630   40267 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0812 11:12:34.404641   40267 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0812 11:12:34.404653   40267 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0812 11:12:34.404662   40267 command_runner.go:130] > # nri_disable_connections = false
	I0812 11:12:34.404672   40267 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0812 11:12:34.404682   40267 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0812 11:12:34.404692   40267 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0812 11:12:34.404703   40267 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0812 11:12:34.404716   40267 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0812 11:12:34.404724   40267 command_runner.go:130] > [crio.stats]
	I0812 11:12:34.404735   40267 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0812 11:12:34.404747   40267 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0812 11:12:34.404756   40267 command_runner.go:130] > # stats_collection_period = 0
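
For reference: with enable_metrics = true and the default metrics_port of 9090 left commented out above, CRI-O serves Prometheus-format metrics over plain HTTP. A minimal Go sketch of scraping that endpoint, assuming the default port and the usual /metrics path are in effect on this node (illustrative only, not part of the test code):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Assumption: CRI-O's metrics server listens on the default port 9090
	// (metrics_port above) and exposes Prometheus text format at /metrics.
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		fmt.Println("metrics endpoint not reachable:", err)
		return
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("read error:", err)
		return
	}
	// Print only the first few hundred bytes; the full output includes the
	// collectors enabled via metrics_collectors, e.g. crio_operations_total.
	if len(body) > 400 {
		body = body[:400]
	}
	fmt.Printf("%s\n", body)
}
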
	I0812 11:12:34.404952   40267 cni.go:84] Creating CNI manager for ""
	I0812 11:12:34.404969   40267 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0812 11:12:34.404983   40267 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0812 11:12:34.405021   40267 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.95 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-053297 NodeName:multinode-053297 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.95"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.95 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0812 11:12:34.405189   40267 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.95
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-053297"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.95
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.95"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
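
The generated kubeadm config above is a single YAML stream of four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick sanity check is to decode the stream document by document; a sketch using gopkg.in/yaml.v3, assuming the stream has been saved locally as kubeadm.yaml (minikube itself ships it to /var/tmp/minikube/kubeadm.yaml.new on the node, as logged a few lines below):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Assumption: the generated stream was saved locally as kubeadm.yaml.
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if err == io.EOF {
				break // end of the multi-document stream
			}
			panic(err)
		}
		// Each document declares its own apiVersion and kind.
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}
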
	
	I0812 11:12:34.405269   40267 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0812 11:12:34.415663   40267 command_runner.go:130] > kubeadm
	I0812 11:12:34.415686   40267 command_runner.go:130] > kubectl
	I0812 11:12:34.415692   40267 command_runner.go:130] > kubelet
	I0812 11:12:34.415760   40267 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 11:12:34.415816   40267 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0812 11:12:34.426102   40267 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0812 11:12:34.444340   40267 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 11:12:34.461435   40267 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0812 11:12:34.477748   40267 ssh_runner.go:195] Run: grep 192.168.39.95	control-plane.minikube.internal$ /etc/hosts
	I0812 11:12:34.481459   40267 command_runner.go:130] > 192.168.39.95	control-plane.minikube.internal
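
The grep above simply confirms that control-plane.minikube.internal already maps to the node IP in /etc/hosts before kubeadm runs. A rough Go equivalent of that check (the IP and hostname are taken from the log; this is an illustrative sketch, not minikube's own code):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	const wantIP = "192.168.39.95"
	const wantHost = "control-plane.minikube.internal"

	f, err := os.Open("/etc/hosts")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		// A matching entry looks like: 192.168.39.95	control-plane.minikube.internal
		if len(fields) >= 2 && fields[0] == wantIP {
			for _, name := range fields[1:] {
				if name == wantHost {
					fmt.Println("entry already present, nothing to do")
					return
				}
			}
		}
	}
	if err := scanner.Err(); err != nil {
		panic(err)
	}
	fmt.Println("entry missing; it would need to be appended before running kubeadm")
}
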
	I0812 11:12:34.481631   40267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:12:34.628531   40267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 11:12:34.643291   40267 certs.go:68] Setting up /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/multinode-053297 for IP: 192.168.39.95
	I0812 11:12:34.643315   40267 certs.go:194] generating shared ca certs ...
	I0812 11:12:34.643330   40267 certs.go:226] acquiring lock for ca certs: {Name:mkab0b83a49d5695bc804bf960b468b7a47f73f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:12:34.643505   40267 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key
	I0812 11:12:34.643548   40267 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key
	I0812 11:12:34.643557   40267 certs.go:256] generating profile certs ...
	I0812 11:12:34.643630   40267 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/multinode-053297/client.key
	I0812 11:12:34.643687   40267 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/multinode-053297/apiserver.key.345acae3
	I0812 11:12:34.643730   40267 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/multinode-053297/proxy-client.key
	I0812 11:12:34.643742   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0812 11:12:34.643756   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0812 11:12:34.643768   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0812 11:12:34.643780   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0812 11:12:34.643794   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/multinode-053297/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0812 11:12:34.643812   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/multinode-053297/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0812 11:12:34.643823   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/multinode-053297/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0812 11:12:34.643845   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/multinode-053297/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0812 11:12:34.643899   40267 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem (1338 bytes)
	W0812 11:12:34.643926   40267 certs.go:480] ignoring /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927_empty.pem, impossibly tiny 0 bytes
	I0812 11:12:34.643935   40267 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem (1679 bytes)
	I0812 11:12:34.643955   40267 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem (1082 bytes)
	I0812 11:12:34.643978   40267 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem (1123 bytes)
	I0812 11:12:34.643998   40267 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem (1679 bytes)
	I0812 11:12:34.644033   40267 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem (1708 bytes)
	I0812 11:12:34.644059   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:12:34.644071   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem -> /usr/share/ca-certificates/10927.pem
	I0812 11:12:34.644083   40267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> /usr/share/ca-certificates/109272.pem
	I0812 11:12:34.644731   40267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 11:12:34.668639   40267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 11:12:34.691797   40267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 11:12:34.715657   40267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 11:12:34.740103   40267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/multinode-053297/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0812 11:12:34.763296   40267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/multinode-053297/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0812 11:12:34.786358   40267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/multinode-053297/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 11:12:34.811844   40267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/multinode-053297/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0812 11:12:34.834711   40267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 11:12:34.857318   40267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem --> /usr/share/ca-certificates/10927.pem (1338 bytes)
	I0812 11:12:34.880290   40267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /usr/share/ca-certificates/109272.pem (1708 bytes)
	I0812 11:12:34.903578   40267 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 11:12:34.919797   40267 ssh_runner.go:195] Run: openssl version
	I0812 11:12:34.925722   40267 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0812 11:12:34.925824   40267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 11:12:34.937142   40267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:12:34.941727   40267 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 12 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:12:34.941762   40267 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:12:34.941845   40267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:12:34.947460   40267 command_runner.go:130] > b5213941
	I0812 11:12:34.947538   40267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 11:12:34.956927   40267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10927.pem && ln -fs /usr/share/ca-certificates/10927.pem /etc/ssl/certs/10927.pem"
	I0812 11:12:34.967735   40267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10927.pem
	I0812 11:12:34.972966   40267 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 12 10:33 /usr/share/ca-certificates/10927.pem
	I0812 11:12:34.973010   40267 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 10:33 /usr/share/ca-certificates/10927.pem
	I0812 11:12:34.973061   40267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10927.pem
	I0812 11:12:34.978934   40267 command_runner.go:130] > 51391683
	I0812 11:12:34.979082   40267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10927.pem /etc/ssl/certs/51391683.0"
	I0812 11:12:34.989945   40267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109272.pem && ln -fs /usr/share/ca-certificates/109272.pem /etc/ssl/certs/109272.pem"
	I0812 11:12:35.001965   40267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109272.pem
	I0812 11:12:35.006644   40267 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 12 10:33 /usr/share/ca-certificates/109272.pem
	I0812 11:12:35.006702   40267 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 10:33 /usr/share/ca-certificates/109272.pem
	I0812 11:12:35.006759   40267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109272.pem
	I0812 11:12:35.012247   40267 command_runner.go:130] > 3ec20f2e
	I0812 11:12:35.012306   40267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109272.pem /etc/ssl/certs/3ec20f2e.0"
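
Each CA bundle copied to /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash, which is what the openssl x509 -hash -noout calls print above (b5213941, 51391683, 3ec20f2e). A small sketch for inspecting one of those PEM files with Go's crypto/x509 before trusting it; the path comes from the log, and computing the OpenSSL subject hash itself is left to the openssl binary, as in the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path as used on the node in the log above.
	data, err := os.ReadFile("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		panic("no CERTIFICATE block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("subject: ", cert.Subject)
	fmt.Println("issuer:  ", cert.Issuer)
	fmt.Println("notAfter:", cert.NotAfter)
	fmt.Println("isCA:    ", cert.IsCA)
}
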
	I0812 11:12:35.021891   40267 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 11:12:35.026553   40267 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 11:12:35.026581   40267 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0812 11:12:35.026587   40267 command_runner.go:130] > Device: 253,1	Inode: 3150891     Links: 1
	I0812 11:12:35.026593   40267 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0812 11:12:35.026600   40267 command_runner.go:130] > Access: 2024-08-12 11:05:34.660424698 +0000
	I0812 11:12:35.026604   40267 command_runner.go:130] > Modify: 2024-08-12 11:05:34.660424698 +0000
	I0812 11:12:35.026609   40267 command_runner.go:130] > Change: 2024-08-12 11:05:34.660424698 +0000
	I0812 11:12:35.026614   40267 command_runner.go:130] >  Birth: 2024-08-12 11:05:34.660424698 +0000
	I0812 11:12:35.026672   40267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0812 11:12:35.032234   40267 command_runner.go:130] > Certificate will not expire
	I0812 11:12:35.032341   40267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0812 11:12:35.037810   40267 command_runner.go:130] > Certificate will not expire
	I0812 11:12:35.037891   40267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0812 11:12:35.043494   40267 command_runner.go:130] > Certificate will not expire
	I0812 11:12:35.043594   40267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0812 11:12:35.049113   40267 command_runner.go:130] > Certificate will not expire
	I0812 11:12:35.049198   40267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0812 11:12:35.054513   40267 command_runner.go:130] > Certificate will not expire
	I0812 11:12:35.054644   40267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0812 11:12:35.059898   40267 command_runner.go:130] > Certificate will not expire
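
Each of the openssl x509 -checkend 86400 calls above asks one question: will this certificate still be valid 86400 seconds (24 hours) from now? A Go sketch of the same check using crypto/x509; the path and window are taken from the log, and openssl's exit-code semantics are replaced by a boolean:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// mirroring what "openssl x509 -checkend <seconds>" tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// One of the certificates checked in the log above.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	if err != nil {
		panic(err)
	}
	if soon {
		fmt.Println("certificate will expire within 24h")
	} else {
		fmt.Println("Certificate will not expire") // same outcome the log records
	}
}
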
	I0812 11:12:35.060063   40267 kubeadm.go:392] StartCluster: {Name:multinode-053297 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-053297 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.9 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.182 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:12:35.060168   40267 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0812 11:12:35.060225   40267 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 11:12:35.097432   40267 command_runner.go:130] > 0971024fe2a93e68dd91575b65f0053d40ec3b25ee41850f0628a96f3ee82fcc
	I0812 11:12:35.097466   40267 command_runner.go:130] > 3ed6125dc9e3a9e06ba87d5427205fc07c4c17e974db82389afdd4d8f9dcb9af
	I0812 11:12:35.097473   40267 command_runner.go:130] > a911be0f1400957d10189ab0274b18180559feb17c632377665040859f3a01ec
	I0812 11:12:35.097480   40267 command_runner.go:130] > 8f04ca85ef86602d88590a245bc263472aa6a03ddbee946668f6b1ce2bc10229
	I0812 11:12:35.097486   40267 command_runner.go:130] > 8d101e8240261ba6812982626be96b5fb5a63df6a9e1ec6133b9c493d3c8b63e
	I0812 11:12:35.097492   40267 command_runner.go:130] > 7e98b01dde217b13d66ed5c05501eace36aa404485298db337b69ff6cc4f635e
	I0812 11:12:35.097497   40267 command_runner.go:130] > 09a8e5a83ca1641d7a329a605b044bc9ec82ed50e1ce7016c7fc516380488ab9
	I0812 11:12:35.097504   40267 command_runner.go:130] > 87e5feab93ae29a05379e2f351e9c8355a4f866d237d4549c6c1992523cecef1
	I0812 11:12:35.097525   40267 cri.go:89] found id: "0971024fe2a93e68dd91575b65f0053d40ec3b25ee41850f0628a96f3ee82fcc"
	I0812 11:12:35.097535   40267 cri.go:89] found id: "3ed6125dc9e3a9e06ba87d5427205fc07c4c17e974db82389afdd4d8f9dcb9af"
	I0812 11:12:35.097540   40267 cri.go:89] found id: "a911be0f1400957d10189ab0274b18180559feb17c632377665040859f3a01ec"
	I0812 11:12:35.097545   40267 cri.go:89] found id: "8f04ca85ef86602d88590a245bc263472aa6a03ddbee946668f6b1ce2bc10229"
	I0812 11:12:35.097552   40267 cri.go:89] found id: "8d101e8240261ba6812982626be96b5fb5a63df6a9e1ec6133b9c493d3c8b63e"
	I0812 11:12:35.097556   40267 cri.go:89] found id: "7e98b01dde217b13d66ed5c05501eace36aa404485298db337b69ff6cc4f635e"
	I0812 11:12:35.097558   40267 cri.go:89] found id: "09a8e5a83ca1641d7a329a605b044bc9ec82ed50e1ce7016c7fc516380488ab9"
	I0812 11:12:35.097561   40267 cri.go:89] found id: "87e5feab93ae29a05379e2f351e9c8355a4f866d237d4549c6c1992523cecef1"
	I0812 11:12:35.097564   40267 cri.go:89] found id: ""
	I0812 11:12:35.097606   40267 ssh_runner.go:195] Run: sudo runc list -f json
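
The crictl invocation above, and the ListContainers debug entries in the CRI-O log that follows, both go through the CRI RuntimeService API on CRI-O's unix socket. A rough sketch of the same kube-system container listing using the k8s.io/cri-api client; the socket path and label key come from the log, everything else is illustrative rather than minikube's actual code path:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// CRI-O's default CRI endpoint, as configured for the kubelet above.
	// (grpc.Dial can be used instead on older grpc-go versions.)
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{
			// Same label filter crictl applies with --label.
			LabelSelector: map[string]string{
				"io.kubernetes.pod.namespace": "kube-system",
			},
		},
	})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		name := ""
		if c.Metadata != nil {
			name = c.Metadata.Name
		}
		fmt.Printf("%s  %s  %s\n", c.Id, c.State, name)
	}
}
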
	
	
	==> CRI-O <==
	Aug 12 11:16:45 multinode-053297 crio[2908]: time="2024-08-12 11:16:45.217627924Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d3688079-5eb2-462a-b985-2e3f278c3aae name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:16:45 multinode-053297 crio[2908]: time="2024-08-12 11:16:45.218057733Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a238277bdd5844905d0abd3010b3629f0ba5122534071ada2c359554ffcfefe4,PodSandboxId:5046a74c1c71263fe0c1fc31da48ecb6ccef4a9ed236f8bfb50e599dc086fe9d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723461195176023941,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-242jl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5bb7b665-dca0-4f7d-9582-b62b8c1a5e57,},Annotations:map[string]string{io.kubernetes.container.hash: 6592dc32,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6c2e77ab819a29eb0e4f2c3452a661ad97ceed1d3e7a641e515d58b7a0bba27,PodSandboxId:72b35edc9899b10089c648b7ae810b0849349ba653534f376ca7e29b1d9be81a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723461161722363474,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-t65tb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 552ad659-4e0c-4004-8ed7-015c99592268,},Annotations:map[string]string{io.kubernetes.container.hash: 888a3696,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41532e164787b2478ba8858fe3a1d85d3395bc69728456da6edd387d3270e6aa,PodSandboxId:e1a03a69e69c192eb46b3f544870f6fa7a26d8dc7a926ef14105f1ecf7094dbe,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723461161635203952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gs2rm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4268e67b-f866-48c0-baff-19b34b4c2b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 4f033f82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a241641b58e72130b89d971b3451bc5e7ea0d5a6f6529e3370f6188b3d187129,PodSandboxId:3be9ca7cc9a867c5a7761497232d1272b39e21f9ff63bc52dfe6b467ef4ee851,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723461161531328150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87ca637d-1e99-4fbb-8b07-75b1d5100c35,},An
notations:map[string]string{io.kubernetes.container.hash: 83a5742d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de11bd5fb35f68c340162aa9fb9dfdbc5361bcd9d722e42f8d920be459f852db,PodSandboxId:a6d88ae6d013878557fb83239663ff4b4ba5cedc5114d2b8368f5a7c9f8984af,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723461161467909942,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9c48w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f528af29-5853-4435-a1f4-92d071412e75,},Annotations:map[string]string{io.ku
bernetes.container.hash: 47a53b4b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a863a59bad8c866e770b207ff1b6065b57aefa733c7a7f3eb8cb7fcc93b2d35,PodSandboxId:59ccbd6d89362db134dfe2582fb6fa5e52f301253397ef65dbec1cc81b752d85,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723461157610529668,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c29ac6eb0af15a9bcb61d9820b92a38,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bca90956398890550d64b7ae94e3ad47cacec831627c1ad0ec287a485e04a8ee,PodSandboxId:32371f054a99685b6b4524564141b68dd12ce7edb1cba51e6bd197277c5cf1dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723461157575373929,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5f92e50f16cfaa476a506d47caffa3c,},Annotations:map[string]string{io.kub
ernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f719b2750bcaaced7ca32c1946a693dd7d09ae45a6292d87eab5c88196f9a9a,PodSandboxId:279b3e1fb216cc39fd5b60d36b3f1ee844f581dc3c0cf6868adefee5c0adbcfc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723461157556699459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bdd2e9222e690c3135b4a315afb6b59,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: abe19987,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:132b15b2d16e01e3b4d236e00ddd2c5adfb7649c2d4d0faed9f1c49f75b59334,PodSandboxId:c84e5be14b24882149a8df99ca775da45b8f0adad91d2a948dd725e68524ddba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723461157547663390,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17979d34f09ad16ac279ea9b2a279470,},Annotations:map[string]string{io.kubernetes.container.hash: 5c62bb7b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1820e892790ef1cdc1a89ebfe83de1d4679004f70abedea923bed03999d209a7,PodSandboxId:a2efa8f2392f6217fbc0ae5ab9634074f7b2de51f8c404d8450e1b69480781be,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723460830843935028,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-242jl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5bb7b665-dca0-4f7d-9582-b62b8c1a5e57,},Annotations:map[string]string{io.kubernetes.container.hash: 6592dc32,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0971024fe2a93e68dd91575b65f0053d40ec3b25ee41850f0628a96f3ee82fcc,PodSandboxId:d356d9ef0c3e603d8efab73c1d6a7d4b9537a376b97bca54f461a16b20cb4002,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723460774558683381,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gs2rm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4268e67b-f866-48c0-baff-19b34b4c2b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 4f033f82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed6125dc9e3a9e06ba87d5427205fc07c4c17e974db82389afdd4d8f9dcb9af,PodSandboxId:c56d1dff8718dc20d16f903ece084aef0e16dff90b62087f3035881f9d43bac6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723460774207174002,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 87ca637d-1e99-4fbb-8b07-75b1d5100c35,},Annotations:map[string]string{io.kubernetes.container.hash: 83a5742d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a911be0f1400957d10189ab0274b18180559feb17c632377665040859f3a01ec,PodSandboxId:96d6ebf847ab7492ffb8e9255dd06e1fe9e366bd2f8f110a7c451a6b30842734,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723460762597105223,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-t65tb,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 552ad659-4e0c-4004-8ed7-015c99592268,},Annotations:map[string]string{io.kubernetes.container.hash: 888a3696,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f04ca85ef86602d88590a245bc263472aa6a03ddbee946668f6b1ce2bc10229,PodSandboxId:f401767a9adec5872e1f6075764e23ea29b9c4e729ebf70bd97da263f10e502a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723460758958979307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9c48w,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: f528af29-5853-4435-a1f4-92d071412e75,},Annotations:map[string]string{io.kubernetes.container.hash: 47a53b4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d101e8240261ba6812982626be96b5fb5a63df6a9e1ec6133b9c493d3c8b63e,PodSandboxId:7b56483787489824cc1be78de167c090000f56b4a7bc54b9ea5aced928015bab,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723460738370383886,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17979d34f09ad16ac279ea9b2a2794
70,},Annotations:map[string]string{io.kubernetes.container.hash: 5c62bb7b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e98b01dde217b13d66ed5c05501eace36aa404485298db337b69ff6cc4f635e,PodSandboxId:1fa45813f29d1a6cd5ac168bb19c426fb968217d3a14e4b97bf586eb9caaaa28,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723460738336231704,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c29ac6eb0af15a9bcb61d9820b92a38,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09a8e5a83ca1641d7a329a605b044bc9ec82ed50e1ce7016c7fc516380488ab9,PodSandboxId:258e6b42c633ca59e111fa0a2af9c553ebfcdb54b1a3ddd58983e7175774b105,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723460738307396597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bdd2e9222e690c3135b4a315afb6b59,},Annotations:map[string]st
ring{io.kubernetes.container.hash: abe19987,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e5feab93ae29a05379e2f351e9c8355a4f866d237d4549c6c1992523cecef1,PodSandboxId:a32608bd26a7fb908bc3b0f92163ca3921f050426b505c194ab170300a2ad84f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723460738267673878,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5f92e50f16cfaa476a506d47caffa3c,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d3688079-5eb2-462a-b985-2e3f278c3aae name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:16:45 multinode-053297 crio[2908]: time="2024-08-12 11:16:45.258424262Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5084a192-bcd3-4404-b2e9-dbc5bb9e316e name=/runtime.v1.RuntimeService/Version
	Aug 12 11:16:45 multinode-053297 crio[2908]: time="2024-08-12 11:16:45.258499460Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5084a192-bcd3-4404-b2e9-dbc5bb9e316e name=/runtime.v1.RuntimeService/Version
	Aug 12 11:16:45 multinode-053297 crio[2908]: time="2024-08-12 11:16:45.259479248Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7ce7468b-9528-41f5-a644-59ac894d786b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:16:45 multinode-053297 crio[2908]: time="2024-08-12 11:16:45.259934549Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723461405259911897,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7ce7468b-9528-41f5-a644-59ac894d786b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:16:45 multinode-053297 crio[2908]: time="2024-08-12 11:16:45.260639645Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b823b4fe-d8f7-46b5-aeba-418c96a300d1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:16:45 multinode-053297 crio[2908]: time="2024-08-12 11:16:45.260698451Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b823b4fe-d8f7-46b5-aeba-418c96a300d1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:16:45 multinode-053297 crio[2908]: time="2024-08-12 11:16:45.261078283Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a238277bdd5844905d0abd3010b3629f0ba5122534071ada2c359554ffcfefe4,PodSandboxId:5046a74c1c71263fe0c1fc31da48ecb6ccef4a9ed236f8bfb50e599dc086fe9d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723461195176023941,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-242jl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5bb7b665-dca0-4f7d-9582-b62b8c1a5e57,},Annotations:map[string]string{io.kubernetes.container.hash: 6592dc32,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6c2e77ab819a29eb0e4f2c3452a661ad97ceed1d3e7a641e515d58b7a0bba27,PodSandboxId:72b35edc9899b10089c648b7ae810b0849349ba653534f376ca7e29b1d9be81a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723461161722363474,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-t65tb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 552ad659-4e0c-4004-8ed7-015c99592268,},Annotations:map[string]string{io.kubernetes.container.hash: 888a3696,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41532e164787b2478ba8858fe3a1d85d3395bc69728456da6edd387d3270e6aa,PodSandboxId:e1a03a69e69c192eb46b3f544870f6fa7a26d8dc7a926ef14105f1ecf7094dbe,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723461161635203952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gs2rm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4268e67b-f866-48c0-baff-19b34b4c2b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 4f033f82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a241641b58e72130b89d971b3451bc5e7ea0d5a6f6529e3370f6188b3d187129,PodSandboxId:3be9ca7cc9a867c5a7761497232d1272b39e21f9ff63bc52dfe6b467ef4ee851,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723461161531328150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87ca637d-1e99-4fbb-8b07-75b1d5100c35,},An
notations:map[string]string{io.kubernetes.container.hash: 83a5742d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de11bd5fb35f68c340162aa9fb9dfdbc5361bcd9d722e42f8d920be459f852db,PodSandboxId:a6d88ae6d013878557fb83239663ff4b4ba5cedc5114d2b8368f5a7c9f8984af,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723461161467909942,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9c48w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f528af29-5853-4435-a1f4-92d071412e75,},Annotations:map[string]string{io.ku
bernetes.container.hash: 47a53b4b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a863a59bad8c866e770b207ff1b6065b57aefa733c7a7f3eb8cb7fcc93b2d35,PodSandboxId:59ccbd6d89362db134dfe2582fb6fa5e52f301253397ef65dbec1cc81b752d85,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723461157610529668,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c29ac6eb0af15a9bcb61d9820b92a38,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bca90956398890550d64b7ae94e3ad47cacec831627c1ad0ec287a485e04a8ee,PodSandboxId:32371f054a99685b6b4524564141b68dd12ce7edb1cba51e6bd197277c5cf1dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723461157575373929,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5f92e50f16cfaa476a506d47caffa3c,},Annotations:map[string]string{io.kub
ernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f719b2750bcaaced7ca32c1946a693dd7d09ae45a6292d87eab5c88196f9a9a,PodSandboxId:279b3e1fb216cc39fd5b60d36b3f1ee844f581dc3c0cf6868adefee5c0adbcfc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723461157556699459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bdd2e9222e690c3135b4a315afb6b59,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: abe19987,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:132b15b2d16e01e3b4d236e00ddd2c5adfb7649c2d4d0faed9f1c49f75b59334,PodSandboxId:c84e5be14b24882149a8df99ca775da45b8f0adad91d2a948dd725e68524ddba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723461157547663390,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17979d34f09ad16ac279ea9b2a279470,},Annotations:map[string]string{io.kubernetes.container.hash: 5c62bb7b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1820e892790ef1cdc1a89ebfe83de1d4679004f70abedea923bed03999d209a7,PodSandboxId:a2efa8f2392f6217fbc0ae5ab9634074f7b2de51f8c404d8450e1b69480781be,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723460830843935028,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-242jl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5bb7b665-dca0-4f7d-9582-b62b8c1a5e57,},Annotations:map[string]string{io.kubernetes.container.hash: 6592dc32,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0971024fe2a93e68dd91575b65f0053d40ec3b25ee41850f0628a96f3ee82fcc,PodSandboxId:d356d9ef0c3e603d8efab73c1d6a7d4b9537a376b97bca54f461a16b20cb4002,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723460774558683381,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gs2rm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4268e67b-f866-48c0-baff-19b34b4c2b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 4f033f82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed6125dc9e3a9e06ba87d5427205fc07c4c17e974db82389afdd4d8f9dcb9af,PodSandboxId:c56d1dff8718dc20d16f903ece084aef0e16dff90b62087f3035881f9d43bac6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723460774207174002,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 87ca637d-1e99-4fbb-8b07-75b1d5100c35,},Annotations:map[string]string{io.kubernetes.container.hash: 83a5742d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a911be0f1400957d10189ab0274b18180559feb17c632377665040859f3a01ec,PodSandboxId:96d6ebf847ab7492ffb8e9255dd06e1fe9e366bd2f8f110a7c451a6b30842734,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723460762597105223,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-t65tb,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 552ad659-4e0c-4004-8ed7-015c99592268,},Annotations:map[string]string{io.kubernetes.container.hash: 888a3696,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f04ca85ef86602d88590a245bc263472aa6a03ddbee946668f6b1ce2bc10229,PodSandboxId:f401767a9adec5872e1f6075764e23ea29b9c4e729ebf70bd97da263f10e502a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723460758958979307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9c48w,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: f528af29-5853-4435-a1f4-92d071412e75,},Annotations:map[string]string{io.kubernetes.container.hash: 47a53b4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d101e8240261ba6812982626be96b5fb5a63df6a9e1ec6133b9c493d3c8b63e,PodSandboxId:7b56483787489824cc1be78de167c090000f56b4a7bc54b9ea5aced928015bab,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723460738370383886,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17979d34f09ad16ac279ea9b2a2794
70,},Annotations:map[string]string{io.kubernetes.container.hash: 5c62bb7b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e98b01dde217b13d66ed5c05501eace36aa404485298db337b69ff6cc4f635e,PodSandboxId:1fa45813f29d1a6cd5ac168bb19c426fb968217d3a14e4b97bf586eb9caaaa28,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723460738336231704,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c29ac6eb0af15a9bcb61d9820b92a38,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09a8e5a83ca1641d7a329a605b044bc9ec82ed50e1ce7016c7fc516380488ab9,PodSandboxId:258e6b42c633ca59e111fa0a2af9c553ebfcdb54b1a3ddd58983e7175774b105,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723460738307396597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bdd2e9222e690c3135b4a315afb6b59,},Annotations:map[string]st
ring{io.kubernetes.container.hash: abe19987,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e5feab93ae29a05379e2f351e9c8355a4f866d237d4549c6c1992523cecef1,PodSandboxId:a32608bd26a7fb908bc3b0f92163ca3921f050426b505c194ab170300a2ad84f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723460738267673878,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5f92e50f16cfaa476a506d47caffa3c,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b823b4fe-d8f7-46b5-aeba-418c96a300d1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:16:45 multinode-053297 crio[2908]: time="2024-08-12 11:16:45.293404429Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=1cef4cad-b689-46d3-a792-07e3efbc4791 name=/runtime.v1.RuntimeService/Version
	Aug 12 11:16:45 multinode-053297 crio[2908]: time="2024-08-12 11:16:45.293487330Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1cef4cad-b689-46d3-a792-07e3efbc4791 name=/runtime.v1.RuntimeService/Version
	Aug 12 11:16:45 multinode-053297 crio[2908]: time="2024-08-12 11:16:45.305321546Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6951da6f-16b5-44d1-8e55-eda07df3a5c9 name=/runtime.v1.RuntimeService/Version
	Aug 12 11:16:45 multinode-053297 crio[2908]: time="2024-08-12 11:16:45.305394558Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6951da6f-16b5-44d1-8e55-eda07df3a5c9 name=/runtime.v1.RuntimeService/Version
	Aug 12 11:16:45 multinode-053297 crio[2908]: time="2024-08-12 11:16:45.306281066Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8b5eb48b-c861-45f9-bc47-3a2bb0297e63 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:16:45 multinode-053297 crio[2908]: time="2024-08-12 11:16:45.306702060Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723461405306680280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8b5eb48b-c861-45f9-bc47-3a2bb0297e63 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:16:45 multinode-053297 crio[2908]: time="2024-08-12 11:16:45.307191261Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9aeae891-2698-4aae-b17b-44f5bfd23e71 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:16:45 multinode-053297 crio[2908]: time="2024-08-12 11:16:45.307348360Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9aeae891-2698-4aae-b17b-44f5bfd23e71 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:16:45 multinode-053297 crio[2908]: time="2024-08-12 11:16:45.307710657Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a238277bdd5844905d0abd3010b3629f0ba5122534071ada2c359554ffcfefe4,PodSandboxId:5046a74c1c71263fe0c1fc31da48ecb6ccef4a9ed236f8bfb50e599dc086fe9d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723461195176023941,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-242jl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5bb7b665-dca0-4f7d-9582-b62b8c1a5e57,},Annotations:map[string]string{io.kubernetes.container.hash: 6592dc32,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6c2e77ab819a29eb0e4f2c3452a661ad97ceed1d3e7a641e515d58b7a0bba27,PodSandboxId:72b35edc9899b10089c648b7ae810b0849349ba653534f376ca7e29b1d9be81a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723461161722363474,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-t65tb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 552ad659-4e0c-4004-8ed7-015c99592268,},Annotations:map[string]string{io.kubernetes.container.hash: 888a3696,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41532e164787b2478ba8858fe3a1d85d3395bc69728456da6edd387d3270e6aa,PodSandboxId:e1a03a69e69c192eb46b3f544870f6fa7a26d8dc7a926ef14105f1ecf7094dbe,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723461161635203952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gs2rm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4268e67b-f866-48c0-baff-19b34b4c2b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 4f033f82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a241641b58e72130b89d971b3451bc5e7ea0d5a6f6529e3370f6188b3d187129,PodSandboxId:3be9ca7cc9a867c5a7761497232d1272b39e21f9ff63bc52dfe6b467ef4ee851,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723461161531328150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87ca637d-1e99-4fbb-8b07-75b1d5100c35,},An
notations:map[string]string{io.kubernetes.container.hash: 83a5742d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de11bd5fb35f68c340162aa9fb9dfdbc5361bcd9d722e42f8d920be459f852db,PodSandboxId:a6d88ae6d013878557fb83239663ff4b4ba5cedc5114d2b8368f5a7c9f8984af,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723461161467909942,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9c48w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f528af29-5853-4435-a1f4-92d071412e75,},Annotations:map[string]string{io.ku
bernetes.container.hash: 47a53b4b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a863a59bad8c866e770b207ff1b6065b57aefa733c7a7f3eb8cb7fcc93b2d35,PodSandboxId:59ccbd6d89362db134dfe2582fb6fa5e52f301253397ef65dbec1cc81b752d85,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723461157610529668,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c29ac6eb0af15a9bcb61d9820b92a38,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bca90956398890550d64b7ae94e3ad47cacec831627c1ad0ec287a485e04a8ee,PodSandboxId:32371f054a99685b6b4524564141b68dd12ce7edb1cba51e6bd197277c5cf1dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723461157575373929,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5f92e50f16cfaa476a506d47caffa3c,},Annotations:map[string]string{io.kub
ernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f719b2750bcaaced7ca32c1946a693dd7d09ae45a6292d87eab5c88196f9a9a,PodSandboxId:279b3e1fb216cc39fd5b60d36b3f1ee844f581dc3c0cf6868adefee5c0adbcfc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723461157556699459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bdd2e9222e690c3135b4a315afb6b59,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: abe19987,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:132b15b2d16e01e3b4d236e00ddd2c5adfb7649c2d4d0faed9f1c49f75b59334,PodSandboxId:c84e5be14b24882149a8df99ca775da45b8f0adad91d2a948dd725e68524ddba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723461157547663390,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17979d34f09ad16ac279ea9b2a279470,},Annotations:map[string]string{io.kubernetes.container.hash: 5c62bb7b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1820e892790ef1cdc1a89ebfe83de1d4679004f70abedea923bed03999d209a7,PodSandboxId:a2efa8f2392f6217fbc0ae5ab9634074f7b2de51f8c404d8450e1b69480781be,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723460830843935028,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-242jl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5bb7b665-dca0-4f7d-9582-b62b8c1a5e57,},Annotations:map[string]string{io.kubernetes.container.hash: 6592dc32,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0971024fe2a93e68dd91575b65f0053d40ec3b25ee41850f0628a96f3ee82fcc,PodSandboxId:d356d9ef0c3e603d8efab73c1d6a7d4b9537a376b97bca54f461a16b20cb4002,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723460774558683381,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gs2rm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4268e67b-f866-48c0-baff-19b34b4c2b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 4f033f82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed6125dc9e3a9e06ba87d5427205fc07c4c17e974db82389afdd4d8f9dcb9af,PodSandboxId:c56d1dff8718dc20d16f903ece084aef0e16dff90b62087f3035881f9d43bac6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723460774207174002,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 87ca637d-1e99-4fbb-8b07-75b1d5100c35,},Annotations:map[string]string{io.kubernetes.container.hash: 83a5742d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a911be0f1400957d10189ab0274b18180559feb17c632377665040859f3a01ec,PodSandboxId:96d6ebf847ab7492ffb8e9255dd06e1fe9e366bd2f8f110a7c451a6b30842734,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723460762597105223,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-t65tb,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 552ad659-4e0c-4004-8ed7-015c99592268,},Annotations:map[string]string{io.kubernetes.container.hash: 888a3696,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f04ca85ef86602d88590a245bc263472aa6a03ddbee946668f6b1ce2bc10229,PodSandboxId:f401767a9adec5872e1f6075764e23ea29b9c4e729ebf70bd97da263f10e502a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723460758958979307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9c48w,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: f528af29-5853-4435-a1f4-92d071412e75,},Annotations:map[string]string{io.kubernetes.container.hash: 47a53b4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d101e8240261ba6812982626be96b5fb5a63df6a9e1ec6133b9c493d3c8b63e,PodSandboxId:7b56483787489824cc1be78de167c090000f56b4a7bc54b9ea5aced928015bab,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723460738370383886,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17979d34f09ad16ac279ea9b2a2794
70,},Annotations:map[string]string{io.kubernetes.container.hash: 5c62bb7b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e98b01dde217b13d66ed5c05501eace36aa404485298db337b69ff6cc4f635e,PodSandboxId:1fa45813f29d1a6cd5ac168bb19c426fb968217d3a14e4b97bf586eb9caaaa28,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723460738336231704,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c29ac6eb0af15a9bcb61d9820b92a38,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09a8e5a83ca1641d7a329a605b044bc9ec82ed50e1ce7016c7fc516380488ab9,PodSandboxId:258e6b42c633ca59e111fa0a2af9c553ebfcdb54b1a3ddd58983e7175774b105,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723460738307396597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bdd2e9222e690c3135b4a315afb6b59,},Annotations:map[string]st
ring{io.kubernetes.container.hash: abe19987,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e5feab93ae29a05379e2f351e9c8355a4f866d237d4549c6c1992523cecef1,PodSandboxId:a32608bd26a7fb908bc3b0f92163ca3921f050426b505c194ab170300a2ad84f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723460738267673878,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5f92e50f16cfaa476a506d47caffa3c,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9aeae891-2698-4aae-b17b-44f5bfd23e71 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:16:45 multinode-053297 crio[2908]: time="2024-08-12 11:16:45.350092392Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=665511b7-6229-41dd-bafa-e94e74becd4a name=/runtime.v1.RuntimeService/Version
	Aug 12 11:16:45 multinode-053297 crio[2908]: time="2024-08-12 11:16:45.350168164Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=665511b7-6229-41dd-bafa-e94e74becd4a name=/runtime.v1.RuntimeService/Version
	Aug 12 11:16:45 multinode-053297 crio[2908]: time="2024-08-12 11:16:45.351476918Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=84e0286e-cd78-4526-8525-48f34fd12762 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:16:45 multinode-053297 crio[2908]: time="2024-08-12 11:16:45.351957706Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723461405351931839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=84e0286e-cd78-4526-8525-48f34fd12762 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:16:45 multinode-053297 crio[2908]: time="2024-08-12 11:16:45.352710195Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d2108078-f4eb-4242-b49d-fdc9fdacdf1b name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:16:45 multinode-053297 crio[2908]: time="2024-08-12 11:16:45.352766778Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d2108078-f4eb-4242-b49d-fdc9fdacdf1b name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:16:45 multinode-053297 crio[2908]: time="2024-08-12 11:16:45.353163841Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a238277bdd5844905d0abd3010b3629f0ba5122534071ada2c359554ffcfefe4,PodSandboxId:5046a74c1c71263fe0c1fc31da48ecb6ccef4a9ed236f8bfb50e599dc086fe9d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723461195176023941,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-242jl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5bb7b665-dca0-4f7d-9582-b62b8c1a5e57,},Annotations:map[string]string{io.kubernetes.container.hash: 6592dc32,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6c2e77ab819a29eb0e4f2c3452a661ad97ceed1d3e7a641e515d58b7a0bba27,PodSandboxId:72b35edc9899b10089c648b7ae810b0849349ba653534f376ca7e29b1d9be81a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723461161722363474,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-t65tb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 552ad659-4e0c-4004-8ed7-015c99592268,},Annotations:map[string]string{io.kubernetes.container.hash: 888a3696,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41532e164787b2478ba8858fe3a1d85d3395bc69728456da6edd387d3270e6aa,PodSandboxId:e1a03a69e69c192eb46b3f544870f6fa7a26d8dc7a926ef14105f1ecf7094dbe,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723461161635203952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gs2rm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4268e67b-f866-48c0-baff-19b34b4c2b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 4f033f82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a241641b58e72130b89d971b3451bc5e7ea0d5a6f6529e3370f6188b3d187129,PodSandboxId:3be9ca7cc9a867c5a7761497232d1272b39e21f9ff63bc52dfe6b467ef4ee851,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723461161531328150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87ca637d-1e99-4fbb-8b07-75b1d5100c35,},An
notations:map[string]string{io.kubernetes.container.hash: 83a5742d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de11bd5fb35f68c340162aa9fb9dfdbc5361bcd9d722e42f8d920be459f852db,PodSandboxId:a6d88ae6d013878557fb83239663ff4b4ba5cedc5114d2b8368f5a7c9f8984af,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723461161467909942,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9c48w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f528af29-5853-4435-a1f4-92d071412e75,},Annotations:map[string]string{io.ku
bernetes.container.hash: 47a53b4b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a863a59bad8c866e770b207ff1b6065b57aefa733c7a7f3eb8cb7fcc93b2d35,PodSandboxId:59ccbd6d89362db134dfe2582fb6fa5e52f301253397ef65dbec1cc81b752d85,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723461157610529668,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c29ac6eb0af15a9bcb61d9820b92a38,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bca90956398890550d64b7ae94e3ad47cacec831627c1ad0ec287a485e04a8ee,PodSandboxId:32371f054a99685b6b4524564141b68dd12ce7edb1cba51e6bd197277c5cf1dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723461157575373929,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5f92e50f16cfaa476a506d47caffa3c,},Annotations:map[string]string{io.kub
ernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f719b2750bcaaced7ca32c1946a693dd7d09ae45a6292d87eab5c88196f9a9a,PodSandboxId:279b3e1fb216cc39fd5b60d36b3f1ee844f581dc3c0cf6868adefee5c0adbcfc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723461157556699459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bdd2e9222e690c3135b4a315afb6b59,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: abe19987,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:132b15b2d16e01e3b4d236e00ddd2c5adfb7649c2d4d0faed9f1c49f75b59334,PodSandboxId:c84e5be14b24882149a8df99ca775da45b8f0adad91d2a948dd725e68524ddba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723461157547663390,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17979d34f09ad16ac279ea9b2a279470,},Annotations:map[string]string{io.kubernetes.container.hash: 5c62bb7b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1820e892790ef1cdc1a89ebfe83de1d4679004f70abedea923bed03999d209a7,PodSandboxId:a2efa8f2392f6217fbc0ae5ab9634074f7b2de51f8c404d8450e1b69480781be,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723460830843935028,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-242jl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5bb7b665-dca0-4f7d-9582-b62b8c1a5e57,},Annotations:map[string]string{io.kubernetes.container.hash: 6592dc32,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0971024fe2a93e68dd91575b65f0053d40ec3b25ee41850f0628a96f3ee82fcc,PodSandboxId:d356d9ef0c3e603d8efab73c1d6a7d4b9537a376b97bca54f461a16b20cb4002,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723460774558683381,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gs2rm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4268e67b-f866-48c0-baff-19b34b4c2b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 4f033f82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed6125dc9e3a9e06ba87d5427205fc07c4c17e974db82389afdd4d8f9dcb9af,PodSandboxId:c56d1dff8718dc20d16f903ece084aef0e16dff90b62087f3035881f9d43bac6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723460774207174002,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 87ca637d-1e99-4fbb-8b07-75b1d5100c35,},Annotations:map[string]string{io.kubernetes.container.hash: 83a5742d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a911be0f1400957d10189ab0274b18180559feb17c632377665040859f3a01ec,PodSandboxId:96d6ebf847ab7492ffb8e9255dd06e1fe9e366bd2f8f110a7c451a6b30842734,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723460762597105223,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-t65tb,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 552ad659-4e0c-4004-8ed7-015c99592268,},Annotations:map[string]string{io.kubernetes.container.hash: 888a3696,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f04ca85ef86602d88590a245bc263472aa6a03ddbee946668f6b1ce2bc10229,PodSandboxId:f401767a9adec5872e1f6075764e23ea29b9c4e729ebf70bd97da263f10e502a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723460758958979307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9c48w,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: f528af29-5853-4435-a1f4-92d071412e75,},Annotations:map[string]string{io.kubernetes.container.hash: 47a53b4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d101e8240261ba6812982626be96b5fb5a63df6a9e1ec6133b9c493d3c8b63e,PodSandboxId:7b56483787489824cc1be78de167c090000f56b4a7bc54b9ea5aced928015bab,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723460738370383886,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17979d34f09ad16ac279ea9b2a2794
70,},Annotations:map[string]string{io.kubernetes.container.hash: 5c62bb7b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e98b01dde217b13d66ed5c05501eace36aa404485298db337b69ff6cc4f635e,PodSandboxId:1fa45813f29d1a6cd5ac168bb19c426fb968217d3a14e4b97bf586eb9caaaa28,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723460738336231704,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c29ac6eb0af15a9bcb61d9820b92a38,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09a8e5a83ca1641d7a329a605b044bc9ec82ed50e1ce7016c7fc516380488ab9,PodSandboxId:258e6b42c633ca59e111fa0a2af9c553ebfcdb54b1a3ddd58983e7175774b105,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723460738307396597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bdd2e9222e690c3135b4a315afb6b59,},Annotations:map[string]st
ring{io.kubernetes.container.hash: abe19987,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e5feab93ae29a05379e2f351e9c8355a4f866d237d4549c6c1992523cecef1,PodSandboxId:a32608bd26a7fb908bc3b0f92163ca3921f050426b505c194ab170300a2ad84f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723460738267673878,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-053297,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5f92e50f16cfaa476a506d47caffa3c,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d2108078-f4eb-4242-b49d-fdc9fdacdf1b name=/runtime.v1.RuntimeService/ListContainers
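The crio debug entries above are the CRI gRPC API being polled over the runtime socket: RuntimeService/Version, ImageService/ImageFsInfo, and RuntimeService/ListContainers with an empty filter ("No filters were applied, returning full container list"). As a minimal sketch only (not taken from this test run), the Go program below issues the same three calls with the k8s.io/cri-api v1 client; the socket path /var/run/crio/crio.sock and the error handling are assumptions to adjust for the environment. On the node itself the rough equivalents are crictl version, crictl imagefsinfo, and crictl ps -a, the latter producing the same listing that the "container status" table below summarizes.

// Sketch only: mirrors the Version, ImageFsInfo and ListContainers calls seen
// in the crio debug log above. Socket path and module versions are assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Assumed CRI-O socket path; other runtimes expose different socket paths.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// RuntimeService/Version: the log shows RuntimeName "cri-o", RuntimeVersion "1.29.1".
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{Version: "0.1.0"})
	if err != nil {
		panic(err)
	}
	fmt.Println(ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// ImageService/ImageFsInfo: image filesystem usage (mountpoint, used bytes, inodes),
	// matching the ImageFsInfoResponse entries above.
	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		panic(err)
	}
	for _, u := range fs.ImageFilesystems {
		fmt.Println(u.FsId.GetMountpoint(), u.UsedBytes.GetValue(), u.InodesUsed.GetValue())
	}

	// RuntimeService/ListContainers with an empty filter, i.e. the full container
	// list that crio logs and that the "container status" table condenses.
	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{},
	})
	if err != nil {
		panic(err)
	}
	for _, c := range list.Containers {
		fmt.Println(c.Metadata.GetName(), c.Metadata.GetAttempt(), c.State)
	}
}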
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a238277bdd584       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   5046a74c1c712       busybox-fc5497c4f-242jl
	e6c2e77ab819a       917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557                                      4 minutes ago       Running             kindnet-cni               1                   72b35edc9899b       kindnet-t65tb
	41532e164787b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   e1a03a69e69c1       coredns-7db6d8ff4d-gs2rm
	a241641b58e72       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   3be9ca7cc9a86       storage-provisioner
	de11bd5fb35f6       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   a6d88ae6d0138       kube-proxy-9c48w
	1a863a59bad8c       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   59ccbd6d89362       kube-scheduler-multinode-053297
	bca9095639889       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   1                   32371f054a996       kube-controller-manager-multinode-053297
	5f719b2750bca       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            1                   279b3e1fb216c       kube-apiserver-multinode-053297
	132b15b2d16e0       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   c84e5be14b248       etcd-multinode-053297
	1820e892790ef       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   a2efa8f2392f6       busybox-fc5497c4f-242jl
	0971024fe2a93       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   d356d9ef0c3e6       coredns-7db6d8ff4d-gs2rm
	3ed6125dc9e3a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   c56d1dff8718d       storage-provisioner
	a911be0f14009       docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3    10 minutes ago      Exited              kindnet-cni               0                   96d6ebf847ab7       kindnet-t65tb
	8f04ca85ef866       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      10 minutes ago      Exited              kube-proxy                0                   f401767a9adec       kube-proxy-9c48w
	8d101e8240261       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      11 minutes ago      Exited              etcd                      0                   7b56483787489       etcd-multinode-053297
	7e98b01dde217       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      11 minutes ago      Exited              kube-scheduler            0                   1fa45813f29d1       kube-scheduler-multinode-053297
	09a8e5a83ca16       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      11 minutes ago      Exited              kube-apiserver            0                   258e6b42c633c       kube-apiserver-multinode-053297
	87e5feab93ae2       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      11 minutes ago      Exited              kube-controller-manager   0                   a32608bd26a7f       kube-controller-manager-multinode-053297
	
	
	==> coredns [0971024fe2a93e68dd91575b65f0053d40ec3b25ee41850f0628a96f3ee82fcc] <==
	[INFO] 10.244.1.2:56809 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002139015s
	[INFO] 10.244.1.2:41138 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000152623s
	[INFO] 10.244.1.2:56293 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000075426s
	[INFO] 10.244.1.2:35681 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001539289s
	[INFO] 10.244.1.2:49715 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000062349s
	[INFO] 10.244.1.2:57037 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077923s
	[INFO] 10.244.1.2:56569 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063543s
	[INFO] 10.244.0.3:45509 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000078679s
	[INFO] 10.244.0.3:47636 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000038773s
	[INFO] 10.244.0.3:36470 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000034693s
	[INFO] 10.244.0.3:51400 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000041683s
	[INFO] 10.244.1.2:38741 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115896s
	[INFO] 10.244.1.2:47897 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105871s
	[INFO] 10.244.1.2:34308 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088503s
	[INFO] 10.244.1.2:36210 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00006932s
	[INFO] 10.244.0.3:39563 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000087584s
	[INFO] 10.244.0.3:33056 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000065255s
	[INFO] 10.244.0.3:57813 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000051078s
	[INFO] 10.244.0.3:40260 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000070334s
	[INFO] 10.244.1.2:39761 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141754s
	[INFO] 10.244.1.2:34700 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000077178s
	[INFO] 10.244.1.2:44691 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00007049s
	[INFO] 10.244.1.2:50622 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109723s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [41532e164787b2478ba8858fe3a1d85d3395bc69728456da6edd387d3270e6aa] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:32805 - 60500 "HINFO IN 1183547355277371863.2435189660485626675. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014960031s
	
	
	==> describe nodes <==
	Name:               multinode-053297
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-053297
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
	                    minikube.k8s.io/name=multinode-053297
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_12T11_05_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 11:05:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-053297
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 11:16:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 11:12:40 +0000   Mon, 12 Aug 2024 11:05:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 11:12:40 +0000   Mon, 12 Aug 2024 11:05:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 11:12:40 +0000   Mon, 12 Aug 2024 11:05:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 11:12:40 +0000   Mon, 12 Aug 2024 11:06:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.95
	  Hostname:    multinode-053297
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9402e00ee03348edb40ff9f911ec78c9
	  System UUID:                9402e00e-e033-48ed-b40f-f9f911ec78c9
	  Boot ID:                    1e24d6d4-b18a-4791-90d4-b9c5725f429c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-242jl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m38s
	  kube-system                 coredns-7db6d8ff4d-gs2rm                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-053297                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-t65tb                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-053297             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-multinode-053297    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-9c48w                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-053297             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m3s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)    kubelet          Node multinode-053297 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)    kubelet          Node multinode-053297 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)    kubelet          Node multinode-053297 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                  kubelet          Node multinode-053297 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  11m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    11m                  kubelet          Node multinode-053297 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                  kubelet          Node multinode-053297 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-053297 event: Registered Node multinode-053297 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-053297 status is now: NodeReady
	  Normal  Starting                 4m9s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m8s (x8 over 4m9s)  kubelet          Node multinode-053297 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x8 over 4m9s)  kubelet          Node multinode-053297 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x7 over 4m9s)  kubelet          Node multinode-053297 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m53s                node-controller  Node multinode-053297 event: Registered Node multinode-053297 in Controller
	
	
	Name:               multinode-053297-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-053297-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
	                    minikube.k8s.io/name=multinode-053297
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_12T11_13_20_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 11:13:19 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-053297-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 11:14:21 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 12 Aug 2024 11:13:50 +0000   Mon, 12 Aug 2024 11:15:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 12 Aug 2024 11:13:50 +0000   Mon, 12 Aug 2024 11:15:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 12 Aug 2024 11:13:50 +0000   Mon, 12 Aug 2024 11:15:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 12 Aug 2024 11:13:50 +0000   Mon, 12 Aug 2024 11:15:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.9
	  Hostname:    multinode-053297-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3c7437527ba44e21af49c437482262f8
	  System UUID:                3c743752-7ba4-4e21-af49-c437482262f8
	  Boot ID:                    0f070c0c-689f-42cd-a17b-70c8ff293cd1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hrnrt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 kindnet-glm6n              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-wmdlz           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m21s                  kube-proxy       
	  Normal  Starting                 9m55s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-053297-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-053297-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-053297-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m40s                  kubelet          Node multinode-053297-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m26s (x2 over 3m26s)  kubelet          Node multinode-053297-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m26s (x2 over 3m26s)  kubelet          Node multinode-053297-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m26s (x2 over 3m26s)  kubelet          Node multinode-053297-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m6s                   kubelet          Node multinode-053297-m02 status is now: NodeReady
	  Normal  NodeNotReady             103s                   node-controller  Node multinode-053297-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.071181] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.200803] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.112004] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.274079] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +4.118414] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +4.020345] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +0.064595] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.994625] systemd-fstab-generator[1288]: Ignoring "noauto" option for root device
	[  +0.070271] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.329425] kauditd_printk_skb: 18 callbacks suppressed
	[  +9.354695] systemd-fstab-generator[1554]: Ignoring "noauto" option for root device
	[Aug12 11:06] kauditd_printk_skb: 60 callbacks suppressed
	[Aug12 11:07] kauditd_printk_skb: 14 callbacks suppressed
	[Aug12 11:12] systemd-fstab-generator[2828]: Ignoring "noauto" option for root device
	[  +0.156041] systemd-fstab-generator[2840]: Ignoring "noauto" option for root device
	[  +0.170886] systemd-fstab-generator[2854]: Ignoring "noauto" option for root device
	[  +0.153664] systemd-fstab-generator[2866]: Ignoring "noauto" option for root device
	[  +0.282629] systemd-fstab-generator[2894]: Ignoring "noauto" option for root device
	[  +8.567549] systemd-fstab-generator[2991]: Ignoring "noauto" option for root device
	[  +0.094114] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.071047] systemd-fstab-generator[3114]: Ignoring "noauto" option for root device
	[  +4.710370] kauditd_printk_skb: 74 callbacks suppressed
	[ +11.500750] kauditd_printk_skb: 32 callbacks suppressed
	[  +2.401140] systemd-fstab-generator[3946]: Ignoring "noauto" option for root device
	[Aug12 11:13] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [132b15b2d16e01e3b4d236e00ddd2c5adfb7649c2d4d0faed9f1c49f75b59334] <==
	{"level":"info","ts":"2024-08-12T11:12:37.888353Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-12T11:12:37.888462Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-12T11:12:37.900324Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 switched to configuration voters=(47039837626653079)"}
	{"level":"info","ts":"2024-08-12T11:12:37.900463Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"986e33f48d4d13ba","local-member-id":"a71e7bac075997","added-peer-id":"a71e7bac075997","added-peer-peer-urls":["https://192.168.39.95:2380"]}
	{"level":"info","ts":"2024-08-12T11:12:37.900616Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"986e33f48d4d13ba","local-member-id":"a71e7bac075997","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T11:12:37.90066Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T11:12:37.913241Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-12T11:12:37.913362Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.95:2380"}
	{"level":"info","ts":"2024-08-12T11:12:37.913491Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.95:2380"}
	{"level":"info","ts":"2024-08-12T11:12:37.914298Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"a71e7bac075997","initial-advertise-peer-urls":["https://192.168.39.95:2380"],"listen-peer-urls":["https://192.168.39.95:2380"],"advertise-client-urls":["https://192.168.39.95:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.95:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-12T11:12:37.914777Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-12T11:12:38.913871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-12T11:12:38.913927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-12T11:12:38.913962Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 received MsgPreVoteResp from a71e7bac075997 at term 2"}
	{"level":"info","ts":"2024-08-12T11:12:38.913995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 became candidate at term 3"}
	{"level":"info","ts":"2024-08-12T11:12:38.914022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 received MsgVoteResp from a71e7bac075997 at term 3"}
	{"level":"info","ts":"2024-08-12T11:12:38.914041Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 became leader at term 3"}
	{"level":"info","ts":"2024-08-12T11:12:38.914062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a71e7bac075997 elected leader a71e7bac075997 at term 3"}
	{"level":"info","ts":"2024-08-12T11:12:38.916667Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"a71e7bac075997","local-member-attributes":"{Name:multinode-053297 ClientURLs:[https://192.168.39.95:2379]}","request-path":"/0/members/a71e7bac075997/attributes","cluster-id":"986e33f48d4d13ba","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-12T11:12:38.91672Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-12T11:12:38.917244Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-12T11:12:38.919562Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.95:2379"}
	{"level":"info","ts":"2024-08-12T11:12:38.922696Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-12T11:12:38.932884Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-12T11:12:38.932933Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [8d101e8240261ba6812982626be96b5fb5a63df6a9e1ec6133b9c493d3c8b63e] <==
	{"level":"info","ts":"2024-08-12T11:06:45.189717Z","caller":"traceutil/trace.go:171","msg":"trace[438084952] transaction","detail":"{read_only:false; response_revision:498; number_of_response:1; }","duration":"185.043578ms","start":"2024-08-12T11:06:45.004653Z","end":"2024-08-12T11:06:45.189696Z","steps":["trace[438084952] 'process raft request'  (duration: 184.964897ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T11:06:45.190026Z","caller":"traceutil/trace.go:171","msg":"trace[378657706] linearizableReadLoop","detail":"{readStateIndex:522; appliedIndex:521; }","duration":"227.712542ms","start":"2024-08-12T11:06:44.962305Z","end":"2024-08-12T11:06:45.190018Z","steps":["trace[378657706] 'read index received'  (duration: 64.207681ms)","trace[378657706] 'applied index is now lower than readState.Index'  (duration: 163.50418ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-12T11:06:45.190212Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.892087ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-12T11:06:45.192138Z","caller":"traceutil/trace.go:171","msg":"trace[1306728832] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:498; }","duration":"229.836248ms","start":"2024-08-12T11:06:44.962281Z","end":"2024-08-12T11:06:45.192117Z","steps":["trace[1306728832] 'agreement among raft nodes before linearized reading'  (duration: 227.821754ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T11:07:39.099264Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.270024ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6455788321831307450 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-053297-m03.17eaf68513057458\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-053297-m03.17eaf68513057458\" value_size:642 lease:6455788321831307001 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-08-12T11:07:39.099559Z","caller":"traceutil/trace.go:171","msg":"trace[483768616] linearizableReadLoop","detail":"{readStateIndex:677; appliedIndex:675; }","duration":"135.205795ms","start":"2024-08-12T11:07:38.964326Z","end":"2024-08-12T11:07:39.099532Z","steps":["trace[483768616] 'read index received'  (duration: 133.17938ms)","trace[483768616] 'applied index is now lower than readState.Index'  (duration: 2.025783ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-12T11:07:39.099656Z","caller":"traceutil/trace.go:171","msg":"trace[1152437288] transaction","detail":"{read_only:false; response_revision:635; number_of_response:1; }","duration":"177.9057ms","start":"2024-08-12T11:07:38.921743Z","end":"2024-08-12T11:07:39.099649Z","steps":["trace[1152437288] 'process raft request'  (duration: 177.736824ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T11:07:39.099687Z","caller":"traceutil/trace.go:171","msg":"trace[1054135088] transaction","detail":"{read_only:false; response_revision:634; number_of_response:1; }","duration":"245.754476ms","start":"2024-08-12T11:07:38.853917Z","end":"2024-08-12T11:07:39.099671Z","steps":["trace[1054135088] 'process raft request'  (duration: 58.584548ms)","trace[1054135088] 'compare'  (duration: 186.187131ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-12T11:07:39.09997Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.653176ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-12T11:07:39.100029Z","caller":"traceutil/trace.go:171","msg":"trace[2070505261] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:635; }","duration":"135.733726ms","start":"2024-08-12T11:07:38.964282Z","end":"2024-08-12T11:07:39.100015Z","steps":["trace[2070505261] 'agreement among raft nodes before linearized reading'  (duration: 135.580265ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T11:07:47.060183Z","caller":"traceutil/trace.go:171","msg":"trace[1920807292] linearizableReadLoop","detail":"{readStateIndex:725; appliedIndex:724; }","duration":"215.381038ms","start":"2024-08-12T11:07:46.844779Z","end":"2024-08-12T11:07:47.06016Z","steps":["trace[1920807292] 'read index received'  (duration: 215.140908ms)","trace[1920807292] 'applied index is now lower than readState.Index'  (duration: 239.06µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-12T11:07:47.060278Z","caller":"traceutil/trace.go:171","msg":"trace[1939185951] transaction","detail":"{read_only:false; response_revision:678; number_of_response:1; }","duration":"229.732803ms","start":"2024-08-12T11:07:46.830536Z","end":"2024-08-12T11:07:47.060269Z","steps":["trace[1939185951] 'process raft request'  (duration: 229.429246ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T11:07:47.06076Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.962054ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/\" range_end:\"/registry/replicasets0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-12T11:07:47.060839Z","caller":"traceutil/trace.go:171","msg":"trace[1526192585] range","detail":"{range_begin:/registry/replicasets/; range_end:/registry/replicasets0; response_count:0; response_revision:678; }","duration":"216.074499ms","start":"2024-08-12T11:07:46.844754Z","end":"2024-08-12T11:07:47.060829Z","steps":["trace[1526192585] 'agreement among raft nodes before linearized reading'  (duration: 215.961305ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T11:08:33.417102Z","caller":"traceutil/trace.go:171","msg":"trace[1729162249] transaction","detail":"{read_only:false; response_revision:763; number_of_response:1; }","duration":"116.483812ms","start":"2024-08-12T11:08:33.300588Z","end":"2024-08-12T11:08:33.417072Z","steps":["trace[1729162249] 'process raft request'  (duration: 116.376117ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T11:10:53.754232Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-12T11:10:53.754351Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-053297","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.95:2380"],"advertise-client-urls":["https://192.168.39.95:2379"]}
	{"level":"warn","ts":"2024-08-12T11:10:53.754501Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-12T11:10:53.754605Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-12T11:10:53.833442Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.95:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-12T11:10:53.833485Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.95:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-12T11:10:53.83356Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"a71e7bac075997","current-leader-member-id":"a71e7bac075997"}
	{"level":"info","ts":"2024-08-12T11:10:53.836252Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.95:2380"}
	{"level":"info","ts":"2024-08-12T11:10:53.83641Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.95:2380"}
	{"level":"info","ts":"2024-08-12T11:10:53.836444Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-053297","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.95:2380"],"advertise-client-urls":["https://192.168.39.95:2379"]}
	
	
	==> kernel <==
	 11:16:45 up 11 min,  0 users,  load average: 0.10, 0.13, 0.09
	Linux multinode-053297 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a911be0f1400957d10189ab0274b18180559feb17c632377665040859f3a01ec] <==
	I0812 11:10:13.581964       1 main.go:322] Node multinode-053297-m03 has CIDR [10.244.3.0/24] 
	I0812 11:10:23.586195       1 main.go:295] Handling node with IPs: map[192.168.39.95:{}]
	I0812 11:10:23.586382       1 main.go:299] handling current node
	I0812 11:10:23.586422       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0812 11:10:23.586441       1 main.go:322] Node multinode-053297-m02 has CIDR [10.244.1.0/24] 
	I0812 11:10:23.586625       1 main.go:295] Handling node with IPs: map[192.168.39.182:{}]
	I0812 11:10:23.586705       1 main.go:322] Node multinode-053297-m03 has CIDR [10.244.3.0/24] 
	I0812 11:10:33.579030       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0812 11:10:33.579063       1 main.go:322] Node multinode-053297-m02 has CIDR [10.244.1.0/24] 
	I0812 11:10:33.579210       1 main.go:295] Handling node with IPs: map[192.168.39.182:{}]
	I0812 11:10:33.579215       1 main.go:322] Node multinode-053297-m03 has CIDR [10.244.3.0/24] 
	I0812 11:10:33.579348       1 main.go:295] Handling node with IPs: map[192.168.39.95:{}]
	I0812 11:10:33.579355       1 main.go:299] handling current node
	I0812 11:10:43.581118       1 main.go:295] Handling node with IPs: map[192.168.39.95:{}]
	I0812 11:10:43.581154       1 main.go:299] handling current node
	I0812 11:10:43.581172       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0812 11:10:43.581177       1 main.go:322] Node multinode-053297-m02 has CIDR [10.244.1.0/24] 
	I0812 11:10:43.581328       1 main.go:295] Handling node with IPs: map[192.168.39.182:{}]
	I0812 11:10:43.581333       1 main.go:322] Node multinode-053297-m03 has CIDR [10.244.3.0/24] 
	I0812 11:10:53.587560       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0812 11:10:53.587604       1 main.go:322] Node multinode-053297-m02 has CIDR [10.244.1.0/24] 
	I0812 11:10:53.587757       1 main.go:295] Handling node with IPs: map[192.168.39.182:{}]
	I0812 11:10:53.587764       1 main.go:322] Node multinode-053297-m03 has CIDR [10.244.3.0/24] 
	I0812 11:10:53.587853       1 main.go:295] Handling node with IPs: map[192.168.39.95:{}]
	I0812 11:10:53.587859       1 main.go:299] handling current node
	
	
	==> kindnet [e6c2e77ab819a29eb0e4f2c3452a661ad97ceed1d3e7a641e515d58b7a0bba27] <==
	I0812 11:15:42.671995       1 main.go:322] Node multinode-053297-m02 has CIDR [10.244.1.0/24] 
	I0812 11:15:52.671356       1 main.go:295] Handling node with IPs: map[192.168.39.95:{}]
	I0812 11:15:52.671543       1 main.go:299] handling current node
	I0812 11:15:52.671580       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0812 11:15:52.671648       1 main.go:322] Node multinode-053297-m02 has CIDR [10.244.1.0/24] 
	I0812 11:16:02.671374       1 main.go:295] Handling node with IPs: map[192.168.39.95:{}]
	I0812 11:16:02.672349       1 main.go:299] handling current node
	I0812 11:16:02.672390       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0812 11:16:02.672399       1 main.go:322] Node multinode-053297-m02 has CIDR [10.244.1.0/24] 
	I0812 11:16:12.672016       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0812 11:16:12.672115       1 main.go:322] Node multinode-053297-m02 has CIDR [10.244.1.0/24] 
	I0812 11:16:12.672283       1 main.go:295] Handling node with IPs: map[192.168.39.95:{}]
	I0812 11:16:12.672301       1 main.go:299] handling current node
	I0812 11:16:22.676955       1 main.go:295] Handling node with IPs: map[192.168.39.95:{}]
	I0812 11:16:22.677028       1 main.go:299] handling current node
	I0812 11:16:22.677050       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0812 11:16:22.677056       1 main.go:322] Node multinode-053297-m02 has CIDR [10.244.1.0/24] 
	I0812 11:16:32.680654       1 main.go:295] Handling node with IPs: map[192.168.39.95:{}]
	I0812 11:16:32.680790       1 main.go:299] handling current node
	I0812 11:16:32.680899       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0812 11:16:32.680921       1 main.go:322] Node multinode-053297-m02 has CIDR [10.244.1.0/24] 
	I0812 11:16:42.671703       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0812 11:16:42.671871       1 main.go:322] Node multinode-053297-m02 has CIDR [10.244.1.0/24] 
	I0812 11:16:42.672040       1 main.go:295] Handling node with IPs: map[192.168.39.95:{}]
	I0812 11:16:42.672068       1 main.go:299] handling current node
	
	
	==> kube-apiserver [09a8e5a83ca1641d7a329a605b044bc9ec82ed50e1ce7016c7fc516380488ab9] <==
	E0812 11:07:12.660449       1 conn.go:339] Error on socket receive: read tcp 192.168.39.95:8443->192.168.39.1:60066: use of closed network connection
	E0812 11:07:12.829238       1 conn.go:339] Error on socket receive: read tcp 192.168.39.95:8443->192.168.39.1:60078: use of closed network connection
	E0812 11:07:12.994686       1 conn.go:339] Error on socket receive: read tcp 192.168.39.95:8443->192.168.39.1:60084: use of closed network connection
	E0812 11:07:13.157558       1 conn.go:339] Error on socket receive: read tcp 192.168.39.95:8443->192.168.39.1:60096: use of closed network connection
	E0812 11:07:13.426425       1 conn.go:339] Error on socket receive: read tcp 192.168.39.95:8443->192.168.39.1:60112: use of closed network connection
	E0812 11:07:13.620653       1 conn.go:339] Error on socket receive: read tcp 192.168.39.95:8443->192.168.39.1:60130: use of closed network connection
	E0812 11:07:13.785491       1 conn.go:339] Error on socket receive: read tcp 192.168.39.95:8443->192.168.39.1:60148: use of closed network connection
	E0812 11:07:13.955005       1 conn.go:339] Error on socket receive: read tcp 192.168.39.95:8443->192.168.39.1:60174: use of closed network connection
	I0812 11:10:53.753842       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0812 11:10:53.757571       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:10:53.757723       1 logging.go:59] [core] [Channel #14 SubChannel #15] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:10:53.757750       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:10:53.786938       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:10:53.787018       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:10:53.787059       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:10:53.787110       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:10:53.787179       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:10:53.787229       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:10:53.787315       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:10:53.787371       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:10:53.787427       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:10:53.787466       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:10:53.787518       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:10:53.787569       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:10:53.787606       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [5f719b2750bcaaced7ca32c1946a693dd7d09ae45a6292d87eab5c88196f9a9a] <==
	I0812 11:12:40.362901       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0812 11:12:40.363080       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0812 11:12:40.364770       1 shared_informer.go:320] Caches are synced for configmaps
	I0812 11:12:40.364928       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0812 11:12:40.365476       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0812 11:12:40.365499       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0812 11:12:40.366028       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0812 11:12:40.372705       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0812 11:12:40.379501       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0812 11:12:40.379619       1 policy_source.go:224] refreshing policies
	I0812 11:12:40.383164       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0812 11:12:40.386697       1 aggregator.go:165] initial CRD sync complete...
	I0812 11:12:40.386765       1 autoregister_controller.go:141] Starting autoregister controller
	I0812 11:12:40.386773       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0812 11:12:40.386780       1 cache.go:39] Caches are synced for autoregister controller
	E0812 11:12:40.387335       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0812 11:12:40.463558       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0812 11:12:41.269768       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0812 11:12:42.496422       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0812 11:12:42.634062       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0812 11:12:42.648778       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0812 11:12:42.722129       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0812 11:12:42.729276       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0812 11:12:52.764873       1 controller.go:615] quota admission added evaluator for: endpoints
	I0812 11:12:52.862129       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [87e5feab93ae29a05379e2f351e9c8355a4f866d237d4549c6c1992523cecef1] <==
	I0812 11:06:45.197969       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-053297-m02\" does not exist"
	I0812 11:06:45.209246       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-053297-m02" podCIDRs=["10.244.1.0/24"]
	I0812 11:06:47.462512       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-053297-m02"
	I0812 11:07:05.497206       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-053297-m02"
	I0812 11:07:07.820310       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.121063ms"
	I0812 11:07:07.827711       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.339141ms"
	I0812 11:07:07.853786       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.008619ms"
	I0812 11:07:07.854018       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="105.149µs"
	I0812 11:07:11.262097       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.230967ms"
	I0812 11:07:11.262255       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.943µs"
	I0812 11:07:11.880512       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.446584ms"
	I0812 11:07:11.880593       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.477µs"
	I0812 11:07:39.102749       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-053297-m02"
	I0812 11:07:39.103057       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-053297-m03\" does not exist"
	I0812 11:07:39.130558       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-053297-m03" podCIDRs=["10.244.2.0/24"]
	I0812 11:07:42.488578       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-053297-m03"
	I0812 11:07:59.715346       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-053297-m02"
	I0812 11:08:28.044370       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-053297-m02"
	I0812 11:08:29.123102       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-053297-m03\" does not exist"
	I0812 11:08:29.123421       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-053297-m02"
	I0812 11:08:29.146270       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-053297-m03" podCIDRs=["10.244.3.0/24"]
	I0812 11:08:48.312095       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-053297-m02"
	I0812 11:09:32.543690       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-053297-m02"
	I0812 11:09:32.607124       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.66078ms"
	I0812 11:09:32.607208       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.523µs"
	
	
	==> kube-controller-manager [bca90956398890550d64b7ae94e3ad47cacec831627c1ad0ec287a485e04a8ee] <==
	I0812 11:13:19.647600       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-053297-m02" podCIDRs=["10.244.1.0/24"]
	I0812 11:13:21.550011       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.924µs"
	I0812 11:13:21.564571       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.544µs"
	I0812 11:13:21.577391       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.499µs"
	I0812 11:13:21.606757       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.17µs"
	I0812 11:13:21.616073       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.639µs"
	I0812 11:13:21.620442       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.973µs"
	I0812 11:13:23.730096       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.258µs"
	I0812 11:13:39.406273       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-053297-m02"
	I0812 11:13:39.426541       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.074µs"
	I0812 11:13:39.442388       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.396µs"
	I0812 11:13:43.087593       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.107604ms"
	I0812 11:13:43.087688       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.293µs"
	I0812 11:13:57.704908       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-053297-m02"
	I0812 11:13:58.738424       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-053297-m03\" does not exist"
	I0812 11:13:58.738597       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-053297-m02"
	I0812 11:13:58.748211       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-053297-m03" podCIDRs=["10.244.2.0/24"]
	I0812 11:14:18.539440       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-053297-m02"
	I0812 11:14:23.932976       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-053297-m02"
	I0812 11:15:02.816033       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.234433ms"
	I0812 11:15:02.817662       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.893µs"
	I0812 11:15:12.666248       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-6nwk2"
	I0812 11:15:12.690526       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-6nwk2"
	I0812 11:15:12.690565       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-d2j9k"
	I0812 11:15:12.721654       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-d2j9k"
	
	
	==> kube-proxy [8f04ca85ef86602d88590a245bc263472aa6a03ddbee946668f6b1ce2bc10229] <==
	I0812 11:05:59.258240       1 server_linux.go:69] "Using iptables proxy"
	I0812 11:05:59.309869       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.95"]
	I0812 11:05:59.403693       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0812 11:05:59.403767       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0812 11:05:59.403834       1 server_linux.go:165] "Using iptables Proxier"
	I0812 11:05:59.414858       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0812 11:05:59.415469       1 server.go:872] "Version info" version="v1.30.3"
	I0812 11:05:59.415484       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 11:05:59.417542       1 config.go:192] "Starting service config controller"
	I0812 11:05:59.418406       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0812 11:05:59.418637       1 config.go:101] "Starting endpoint slice config controller"
	I0812 11:05:59.418644       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0812 11:05:59.420760       1 config.go:319] "Starting node config controller"
	I0812 11:05:59.420767       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0812 11:05:59.520920       1 shared_informer.go:320] Caches are synced for node config
	I0812 11:05:59.520953       1 shared_informer.go:320] Caches are synced for service config
	I0812 11:05:59.520982       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [de11bd5fb35f68c340162aa9fb9dfdbc5361bcd9d722e42f8d920be459f852db] <==
	I0812 11:12:41.800339       1 server_linux.go:69] "Using iptables proxy"
	I0812 11:12:41.818676       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.95"]
	I0812 11:12:41.913502       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0812 11:12:41.913570       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0812 11:12:41.913588       1 server_linux.go:165] "Using iptables Proxier"
	I0812 11:12:41.916022       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0812 11:12:41.916220       1 server.go:872] "Version info" version="v1.30.3"
	I0812 11:12:41.916246       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 11:12:41.918176       1 config.go:192] "Starting service config controller"
	I0812 11:12:41.918213       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0812 11:12:41.918243       1 config.go:101] "Starting endpoint slice config controller"
	I0812 11:12:41.918246       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0812 11:12:41.918777       1 config.go:319] "Starting node config controller"
	I0812 11:12:41.920857       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0812 11:12:42.018483       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0812 11:12:42.018570       1 shared_informer.go:320] Caches are synced for service config
	I0812 11:12:42.021092       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1a863a59bad8c866e770b207ff1b6065b57aefa733c7a7f3eb8cb7fcc93b2d35] <==
	I0812 11:12:38.791447       1 serving.go:380] Generated self-signed cert in-memory
	I0812 11:12:40.392275       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0812 11:12:40.392313       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 11:12:40.398889       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0812 11:12:40.398976       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0812 11:12:40.398983       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0812 11:12:40.399005       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0812 11:12:40.404021       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0812 11:12:40.404078       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0812 11:12:40.404121       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0812 11:12:40.404129       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0812 11:12:40.500146       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0812 11:12:40.505001       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0812 11:12:40.505101       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kube-scheduler [7e98b01dde217b13d66ed5c05501eace36aa404485298db337b69ff6cc4f635e] <==
	E0812 11:05:41.812969       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0812 11:05:41.876095       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0812 11:05:41.876140       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0812 11:05:41.933559       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0812 11:05:41.933662       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0812 11:05:41.934187       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0812 11:05:41.934248       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0812 11:05:41.975212       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0812 11:05:41.975254       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0812 11:05:42.021230       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0812 11:05:42.021342       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0812 11:05:42.040880       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0812 11:05:42.040924       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0812 11:05:42.143350       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0812 11:05:42.143447       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0812 11:05:42.169645       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0812 11:05:42.169760       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0812 11:05:42.173504       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0812 11:05:42.173626       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0812 11:05:42.286583       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0812 11:05:42.286846       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0812 11:05:42.326651       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0812 11:05:42.327192       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0812 11:05:45.081501       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0812 11:10:53.765714       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 12 11:12:41 multinode-053297 kubelet[3121]: I0812 11:12:41.020501    3121 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/552ad659-4e0c-4004-8ed7-015c99592268-xtables-lock\") pod \"kindnet-t65tb\" (UID: \"552ad659-4e0c-4004-8ed7-015c99592268\") " pod="kube-system/kindnet-t65tb"
	Aug 12 11:12:41 multinode-053297 kubelet[3121]: I0812 11:12:41.020557    3121 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/552ad659-4e0c-4004-8ed7-015c99592268-lib-modules\") pod \"kindnet-t65tb\" (UID: \"552ad659-4e0c-4004-8ed7-015c99592268\") " pod="kube-system/kindnet-t65tb"
	Aug 12 11:12:41 multinode-053297 kubelet[3121]: I0812 11:12:41.020579    3121 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f528af29-5853-4435-a1f4-92d071412e75-lib-modules\") pod \"kube-proxy-9c48w\" (UID: \"f528af29-5853-4435-a1f4-92d071412e75\") " pod="kube-system/kube-proxy-9c48w"
	Aug 12 11:12:41 multinode-053297 kubelet[3121]: I0812 11:12:41.020652    3121 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/87ca637d-1e99-4fbb-8b07-75b1d5100c35-tmp\") pod \"storage-provisioner\" (UID: \"87ca637d-1e99-4fbb-8b07-75b1d5100c35\") " pod="kube-system/storage-provisioner"
	Aug 12 11:12:47 multinode-053297 kubelet[3121]: I0812 11:12:47.588854    3121 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 12 11:13:36 multinode-053297 kubelet[3121]: E0812 11:13:36.966859    3121 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 11:13:36 multinode-053297 kubelet[3121]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 11:13:36 multinode-053297 kubelet[3121]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 11:13:36 multinode-053297 kubelet[3121]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 11:13:36 multinode-053297 kubelet[3121]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 11:14:36 multinode-053297 kubelet[3121]: E0812 11:14:36.966118    3121 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 11:14:36 multinode-053297 kubelet[3121]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 11:14:36 multinode-053297 kubelet[3121]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 11:14:36 multinode-053297 kubelet[3121]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 11:14:36 multinode-053297 kubelet[3121]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 11:15:36 multinode-053297 kubelet[3121]: E0812 11:15:36.967650    3121 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 11:15:36 multinode-053297 kubelet[3121]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 11:15:36 multinode-053297 kubelet[3121]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 11:15:36 multinode-053297 kubelet[3121]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 11:15:36 multinode-053297 kubelet[3121]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 11:16:36 multinode-053297 kubelet[3121]: E0812 11:16:36.969222    3121 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 11:16:36 multinode-053297 kubelet[3121]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 11:16:36 multinode-053297 kubelet[3121]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 11:16:36 multinode-053297 kubelet[3121]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 11:16:36 multinode-053297 kubelet[3121]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0812 11:16:44.943627   42222 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19409-3774/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-053297 -n multinode-053297
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-053297 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.45s)

                                                
                                    
x
+
TestPreload (269.58s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-028901 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0812 11:20:45.936220   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-028901 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m6.791055393s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-028901 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-028901 image pull gcr.io/k8s-minikube/busybox: (2.79266969s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-028901
E0812 11:23:14.023219   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
E0812 11:23:30.976075   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-028901: exit status 82 (2m0.477837491s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-028901"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-028901 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-08-12 11:24:48.294971802 +0000 UTC m=+3877.451894372
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-028901 -n test-preload-028901
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-028901 -n test-preload-028901: exit status 3 (18.585982634s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0812 11:25:06.877297   45548 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.163:22: connect: no route to host
	E0812 11:25:06.877315   45548 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.163:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-028901" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-028901" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-028901
--- FAIL: TestPreload (269.58s)

                                                
                                    
x
+
TestKubernetesUpgrade (726.23s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-535697 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-535697 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m50.279256334s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-535697] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-535697" primary control-plane node in "kubernetes-upgrade-535697" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 11:27:02.871909   46645 out.go:291] Setting OutFile to fd 1 ...
	I0812 11:27:02.873049   46645 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:27:02.873066   46645 out.go:304] Setting ErrFile to fd 2...
	I0812 11:27:02.873086   46645 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:27:02.873620   46645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 11:27:02.875856   46645 out.go:298] Setting JSON to false
	I0812 11:27:02.876681   46645 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4164,"bootTime":1723457859,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 11:27:02.876737   46645 start.go:139] virtualization: kvm guest
	I0812 11:27:02.878448   46645 out.go:177] * [kubernetes-upgrade-535697] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 11:27:02.880522   46645 notify.go:220] Checking for updates...
	I0812 11:27:02.881394   46645 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 11:27:02.882893   46645 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 11:27:02.885353   46645 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 11:27:02.887947   46645 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 11:27:02.889367   46645 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 11:27:02.890665   46645 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 11:27:02.892444   46645 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 11:27:02.935360   46645 out.go:177] * Using the kvm2 driver based on user configuration
	I0812 11:27:02.936733   46645 start.go:297] selected driver: kvm2
	I0812 11:27:02.936755   46645 start.go:901] validating driver "kvm2" against <nil>
	I0812 11:27:02.936769   46645 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 11:27:02.937753   46645 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 11:27:02.955139   46645 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19409-3774/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 11:27:02.973318   46645 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 11:27:02.973385   46645 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 11:27:02.973684   46645 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0812 11:27:02.973759   46645 cni.go:84] Creating CNI manager for ""
	I0812 11:27:02.973777   46645 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:27:02.973794   46645 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0812 11:27:02.973874   46645 start.go:340] cluster config:
	{Name:kubernetes-upgrade-535697 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-535697 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:27:02.974027   46645 iso.go:125] acquiring lock: {Name:mk817f4bb6a5031e68978029203802957205757f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 11:27:02.976796   46645 out.go:177] * Starting "kubernetes-upgrade-535697" primary control-plane node in "kubernetes-upgrade-535697" cluster
	I0812 11:27:02.978252   46645 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0812 11:27:02.978304   46645 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0812 11:27:02.978316   46645 cache.go:56] Caching tarball of preloaded images
	I0812 11:27:02.978414   46645 preload.go:172] Found /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 11:27:02.978428   46645 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0812 11:27:02.978768   46645 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kubernetes-upgrade-535697/config.json ...
	I0812 11:27:02.978799   46645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kubernetes-upgrade-535697/config.json: {Name:mk871c24d35f256c31570ceb0e89d74f5fda5c80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:27:02.978973   46645 start.go:360] acquireMachinesLock for kubernetes-upgrade-535697: {Name:mkf1f3ff7da562a087a1e344d13e67c3c8140973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 11:27:27.157537   46645 start.go:364] duration metric: took 24.178518657s to acquireMachinesLock for "kubernetes-upgrade-535697"
	I0812 11:27:27.157605   46645 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-535697 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-535697 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 11:27:27.157719   46645 start.go:125] createHost starting for "" (driver="kvm2")
	I0812 11:27:27.159742   46645 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 11:27:27.159988   46645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:27.160052   46645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:27.177670   46645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39823
	I0812 11:27:27.178090   46645 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:27.178703   46645 main.go:141] libmachine: Using API Version  1
	I0812 11:27:27.178731   46645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:27.179100   46645 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:27.179279   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetMachineName
	I0812 11:27:27.179503   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .DriverName
	I0812 11:27:27.179674   46645 start.go:159] libmachine.API.Create for "kubernetes-upgrade-535697" (driver="kvm2")
	I0812 11:27:27.179699   46645 client.go:168] LocalClient.Create starting
	I0812 11:27:27.179731   46645 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem
	I0812 11:27:27.179778   46645 main.go:141] libmachine: Decoding PEM data...
	I0812 11:27:27.179801   46645 main.go:141] libmachine: Parsing certificate...
	I0812 11:27:27.179869   46645 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem
	I0812 11:27:27.179893   46645 main.go:141] libmachine: Decoding PEM data...
	I0812 11:27:27.179914   46645 main.go:141] libmachine: Parsing certificate...
	I0812 11:27:27.179938   46645 main.go:141] libmachine: Running pre-create checks...
	I0812 11:27:27.179951   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .PreCreateCheck
	I0812 11:27:27.180308   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetConfigRaw
	I0812 11:27:27.180746   46645 main.go:141] libmachine: Creating machine...
	I0812 11:27:27.180759   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .Create
	I0812 11:27:27.180927   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Creating KVM machine...
	I0812 11:27:27.182521   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | found existing default KVM network
	I0812 11:27:27.183701   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | I0812 11:27:27.183521   46991 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:06:88:39} reservation:<nil>}
	I0812 11:27:27.184601   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | I0812 11:27:27.184520   46991 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000252330}
	I0812 11:27:27.184644   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | created network xml: 
	I0812 11:27:27.184665   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | <network>
	I0812 11:27:27.184702   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG |   <name>mk-kubernetes-upgrade-535697</name>
	I0812 11:27:27.184727   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG |   <dns enable='no'/>
	I0812 11:27:27.184737   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG |   
	I0812 11:27:27.184747   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0812 11:27:27.184757   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG |     <dhcp>
	I0812 11:27:27.184777   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0812 11:27:27.184791   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG |     </dhcp>
	I0812 11:27:27.184805   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG |   </ip>
	I0812 11:27:27.184816   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG |   
	I0812 11:27:27.184827   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | </network>
	I0812 11:27:27.184842   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | 
	I0812 11:27:27.190041   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | trying to create private KVM network mk-kubernetes-upgrade-535697 192.168.50.0/24...
	I0812 11:27:27.264237   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | private KVM network mk-kubernetes-upgrade-535697 192.168.50.0/24 created
	I0812 11:27:27.264268   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | I0812 11:27:27.264191   46991 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 11:27:27.264365   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Setting up store path in /home/jenkins/minikube-integration/19409-3774/.minikube/machines/kubernetes-upgrade-535697 ...
	I0812 11:27:27.264425   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Building disk image from file:///home/jenkins/minikube-integration/19409-3774/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0812 11:27:27.264467   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Downloading /home/jenkins/minikube-integration/19409-3774/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19409-3774/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0812 11:27:27.494410   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | I0812 11:27:27.494252   46991 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/kubernetes-upgrade-535697/id_rsa...
	I0812 11:27:27.661779   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | I0812 11:27:27.661630   46991 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/kubernetes-upgrade-535697/kubernetes-upgrade-535697.rawdisk...
	I0812 11:27:27.661813   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | Writing magic tar header
	I0812 11:27:27.661843   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | Writing SSH key tar header
	I0812 11:27:27.661863   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | I0812 11:27:27.661764   46991 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19409-3774/.minikube/machines/kubernetes-upgrade-535697 ...
	I0812 11:27:27.661886   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/kubernetes-upgrade-535697
	I0812 11:27:27.661913   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube/machines/kubernetes-upgrade-535697 (perms=drwx------)
	I0812 11:27:27.661928   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube/machines
	I0812 11:27:27.661950   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 11:27:27.661963   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774
	I0812 11:27:27.661975   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube/machines (perms=drwxr-xr-x)
	I0812 11:27:27.661991   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0812 11:27:27.662007   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | Checking permissions on dir: /home/jenkins
	I0812 11:27:27.662020   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube (perms=drwxr-xr-x)
	I0812 11:27:27.662032   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | Checking permissions on dir: /home
	I0812 11:27:27.662045   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | Skipping /home - not owner
	I0812 11:27:27.662103   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774 (perms=drwxrwxr-x)
	I0812 11:27:27.662139   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0812 11:27:27.662159   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0812 11:27:27.662175   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Creating domain...
	I0812 11:27:27.663258   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) define libvirt domain using xml: 
	I0812 11:27:27.663280   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) <domain type='kvm'>
	I0812 11:27:27.663294   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)   <name>kubernetes-upgrade-535697</name>
	I0812 11:27:27.663303   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)   <memory unit='MiB'>2200</memory>
	I0812 11:27:27.663312   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)   <vcpu>2</vcpu>
	I0812 11:27:27.663324   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)   <features>
	I0812 11:27:27.663337   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)     <acpi/>
	I0812 11:27:27.663349   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)     <apic/>
	I0812 11:27:27.663372   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)     <pae/>
	I0812 11:27:27.663391   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)     
	I0812 11:27:27.663402   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)   </features>
	I0812 11:27:27.663411   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)   <cpu mode='host-passthrough'>
	I0812 11:27:27.663423   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)   
	I0812 11:27:27.663433   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)   </cpu>
	I0812 11:27:27.663444   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)   <os>
	I0812 11:27:27.663459   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)     <type>hvm</type>
	I0812 11:27:27.663471   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)     <boot dev='cdrom'/>
	I0812 11:27:27.663483   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)     <boot dev='hd'/>
	I0812 11:27:27.663494   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)     <bootmenu enable='no'/>
	I0812 11:27:27.663505   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)   </os>
	I0812 11:27:27.663515   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)   <devices>
	I0812 11:27:27.663531   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)     <disk type='file' device='cdrom'>
	I0812 11:27:27.663555   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)       <source file='/home/jenkins/minikube-integration/19409-3774/.minikube/machines/kubernetes-upgrade-535697/boot2docker.iso'/>
	I0812 11:27:27.663567   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)       <target dev='hdc' bus='scsi'/>
	I0812 11:27:27.663578   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)       <readonly/>
	I0812 11:27:27.663588   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)     </disk>
	I0812 11:27:27.663602   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)     <disk type='file' device='disk'>
	I0812 11:27:27.663621   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0812 11:27:27.663644   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)       <source file='/home/jenkins/minikube-integration/19409-3774/.minikube/machines/kubernetes-upgrade-535697/kubernetes-upgrade-535697.rawdisk'/>
	I0812 11:27:27.663663   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)       <target dev='hda' bus='virtio'/>
	I0812 11:27:27.663674   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)     </disk>
	I0812 11:27:27.663687   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)     <interface type='network'>
	I0812 11:27:27.663699   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)       <source network='mk-kubernetes-upgrade-535697'/>
	I0812 11:27:27.663718   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)       <model type='virtio'/>
	I0812 11:27:27.663733   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)     </interface>
	I0812 11:27:27.663745   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)     <interface type='network'>
	I0812 11:27:27.663754   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)       <source network='default'/>
	I0812 11:27:27.663767   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)       <model type='virtio'/>
	I0812 11:27:27.663778   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)     </interface>
	I0812 11:27:27.663790   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)     <serial type='pty'>
	I0812 11:27:27.663801   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)       <target port='0'/>
	I0812 11:27:27.663835   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)     </serial>
	I0812 11:27:27.663862   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)     <console type='pty'>
	I0812 11:27:27.663874   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)       <target type='serial' port='0'/>
	I0812 11:27:27.663885   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)     </console>
	I0812 11:27:27.663911   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)     <rng model='virtio'>
	I0812 11:27:27.663931   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)       <backend model='random'>/dev/random</backend>
	I0812 11:27:27.663944   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)     </rng>
	I0812 11:27:27.663955   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)     
	I0812 11:27:27.663967   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)     
	I0812 11:27:27.663980   46645 main.go:141] libmachine: (kubernetes-upgrade-535697)   </devices>
	I0812 11:27:27.663993   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) </domain>
	I0812 11:27:27.664004   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) 
	I0812 11:27:27.669046   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:2b:67:6c in network default
	I0812 11:27:27.669650   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Ensuring networks are active...
	I0812 11:27:27.669671   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:27.670392   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Ensuring network default is active
	I0812 11:27:27.670718   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Ensuring network mk-kubernetes-upgrade-535697 is active
	I0812 11:27:27.671172   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Getting domain xml...
	I0812 11:27:27.671862   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Creating domain...
	I0812 11:27:28.985972   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Waiting to get IP...
	I0812 11:27:28.986997   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:28.987479   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | unable to find current IP address of domain kubernetes-upgrade-535697 in network mk-kubernetes-upgrade-535697
	I0812 11:27:28.987507   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | I0812 11:27:28.987455   46991 retry.go:31] will retry after 219.181648ms: waiting for machine to come up
	I0812 11:27:29.208006   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:29.208717   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | unable to find current IP address of domain kubernetes-upgrade-535697 in network mk-kubernetes-upgrade-535697
	I0812 11:27:29.208746   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | I0812 11:27:29.208672   46991 retry.go:31] will retry after 341.053001ms: waiting for machine to come up
	I0812 11:27:29.551160   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:29.551646   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | unable to find current IP address of domain kubernetes-upgrade-535697 in network mk-kubernetes-upgrade-535697
	I0812 11:27:29.551678   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | I0812 11:27:29.551611   46991 retry.go:31] will retry after 317.956255ms: waiting for machine to come up
	I0812 11:27:29.871170   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:29.871666   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | unable to find current IP address of domain kubernetes-upgrade-535697 in network mk-kubernetes-upgrade-535697
	I0812 11:27:29.871700   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | I0812 11:27:29.871636   46991 retry.go:31] will retry after 527.057145ms: waiting for machine to come up
	I0812 11:27:30.400320   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:30.400828   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | unable to find current IP address of domain kubernetes-upgrade-535697 in network mk-kubernetes-upgrade-535697
	I0812 11:27:30.400884   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | I0812 11:27:30.400770   46991 retry.go:31] will retry after 539.11368ms: waiting for machine to come up
	I0812 11:27:30.941649   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:30.942177   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | unable to find current IP address of domain kubernetes-upgrade-535697 in network mk-kubernetes-upgrade-535697
	I0812 11:27:30.942205   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | I0812 11:27:30.942132   46991 retry.go:31] will retry after 721.323365ms: waiting for machine to come up
	I0812 11:27:31.664962   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:31.665395   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | unable to find current IP address of domain kubernetes-upgrade-535697 in network mk-kubernetes-upgrade-535697
	I0812 11:27:31.665423   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | I0812 11:27:31.665343   46991 retry.go:31] will retry after 944.047859ms: waiting for machine to come up
	I0812 11:27:32.610663   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:32.611128   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | unable to find current IP address of domain kubernetes-upgrade-535697 in network mk-kubernetes-upgrade-535697
	I0812 11:27:32.611154   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | I0812 11:27:32.611083   46991 retry.go:31] will retry after 1.294253647s: waiting for machine to come up
	I0812 11:27:33.907635   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:33.908174   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | unable to find current IP address of domain kubernetes-upgrade-535697 in network mk-kubernetes-upgrade-535697
	I0812 11:27:33.908209   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | I0812 11:27:33.908098   46991 retry.go:31] will retry after 1.567658342s: waiting for machine to come up
	I0812 11:27:35.477087   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:35.477634   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | unable to find current IP address of domain kubernetes-upgrade-535697 in network mk-kubernetes-upgrade-535697
	I0812 11:27:35.477667   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | I0812 11:27:35.477577   46991 retry.go:31] will retry after 2.095422214s: waiting for machine to come up
	I0812 11:27:37.575137   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:37.575616   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | unable to find current IP address of domain kubernetes-upgrade-535697 in network mk-kubernetes-upgrade-535697
	I0812 11:27:37.575645   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | I0812 11:27:37.575557   46991 retry.go:31] will retry after 2.904367163s: waiting for machine to come up
	I0812 11:27:40.483791   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:40.484379   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | unable to find current IP address of domain kubernetes-upgrade-535697 in network mk-kubernetes-upgrade-535697
	I0812 11:27:40.484408   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | I0812 11:27:40.484315   46991 retry.go:31] will retry after 2.993315611s: waiting for machine to come up
	I0812 11:27:43.479561   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:43.479986   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | unable to find current IP address of domain kubernetes-upgrade-535697 in network mk-kubernetes-upgrade-535697
	I0812 11:27:43.480012   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | I0812 11:27:43.479934   46991 retry.go:31] will retry after 3.321549751s: waiting for machine to come up
	I0812 11:27:46.805338   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:46.805796   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Found IP for machine: 192.168.50.39
	I0812 11:27:46.805833   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has current primary IP address 192.168.50.39 and MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:46.805847   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Reserving static IP address...
	I0812 11:27:46.806178   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-535697", mac: "52:54:00:10:a6:91", ip: "192.168.50.39"} in network mk-kubernetes-upgrade-535697
	I0812 11:27:46.882709   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | Getting to WaitForSSH function...
	I0812 11:27:46.882747   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Reserved static IP address: 192.168.50.39
	I0812 11:27:46.882763   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Waiting for SSH to be available...
	I0812 11:27:46.885160   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:46.885565   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:a6:91", ip: ""} in network mk-kubernetes-upgrade-535697: {Iface:virbr2 ExpiryTime:2024-08-12 12:27:41 +0000 UTC Type:0 Mac:52:54:00:10:a6:91 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:minikube Clientid:01:52:54:00:10:a6:91}
	I0812 11:27:46.885599   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined IP address 192.168.50.39 and MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:46.885705   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | Using SSH client type: external
	I0812 11:27:46.885728   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | Using SSH private key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/kubernetes-upgrade-535697/id_rsa (-rw-------)
	I0812 11:27:46.885770   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.39 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19409-3774/.minikube/machines/kubernetes-upgrade-535697/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0812 11:27:46.885789   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | About to run SSH command:
	I0812 11:27:46.885805   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | exit 0
	I0812 11:27:47.004959   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | SSH cmd err, output: <nil>: 
	I0812 11:27:47.005217   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) KVM machine creation complete!
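The "will retry after …: waiting for machine to come up" lines above come from minikube's internal retry helper polling libvirt until the guest has a DHCP lease. Below is a minimal stand-alone sketch of the same wait-with-growing-backoff pattern; the lookupIP probe and all names are illustrative assumptions, not minikube's actual API.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP stands in for the libvirt DHCP-lease query; in this sketch it
    // always fails, the way the real probe does until the guest has an address.
    func lookupIP() (string, error) {
    	return "", errors.New("unable to find current IP address")
    }

    // waitForIP retries lookupIP with a growing, jittered delay, mirroring the
    // roughly 300ms -> 3s progression visible in the log above.
    func waitForIP(deadline time.Duration) (string, error) {
    	start := time.Now()
    	delay := 300 * time.Millisecond
    	for time.Since(start) < deadline {
    		if ip, err := lookupIP(); err == nil {
    			return ip, nil
    		}
    		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		delay = delay * 3 / 2 // back off gradually rather than doubling
    	}
    	return "", errors.New("machine did not come up before the deadline")
    }

    func main() {
    	if ip, err := waitForIP(5 * time.Second); err != nil {
    		fmt.Println("gave up:", err)
    	} else {
    		fmt.Println("found IP:", ip)
    	}
    }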
	I0812 11:27:47.005631   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetConfigRaw
	I0812 11:27:47.006166   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .DriverName
	I0812 11:27:47.006428   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .DriverName
	I0812 11:27:47.006626   46645 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0812 11:27:47.006639   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetState
	I0812 11:27:47.007778   46645 main.go:141] libmachine: Detecting operating system of created instance...
	I0812 11:27:47.007791   46645 main.go:141] libmachine: Waiting for SSH to be available...
	I0812 11:27:47.007796   46645 main.go:141] libmachine: Getting to WaitForSSH function...
	I0812 11:27:47.007802   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHHostname
	I0812 11:27:47.010146   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:47.010568   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:a6:91", ip: ""} in network mk-kubernetes-upgrade-535697: {Iface:virbr2 ExpiryTime:2024-08-12 12:27:41 +0000 UTC Type:0 Mac:52:54:00:10:a6:91 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-535697 Clientid:01:52:54:00:10:a6:91}
	I0812 11:27:47.010597   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined IP address 192.168.50.39 and MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:47.010740   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHPort
	I0812 11:27:47.010910   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHKeyPath
	I0812 11:27:47.011033   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHKeyPath
	I0812 11:27:47.011129   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHUsername
	I0812 11:27:47.011255   46645 main.go:141] libmachine: Using SSH client type: native
	I0812 11:27:47.011448   46645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.39 22 <nil> <nil>}
	I0812 11:27:47.011459   46645 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0812 11:27:47.108195   46645 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 11:27:47.108233   46645 main.go:141] libmachine: Detecting the provisioner...
	I0812 11:27:47.108240   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHHostname
	I0812 11:27:47.111056   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:47.111422   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:a6:91", ip: ""} in network mk-kubernetes-upgrade-535697: {Iface:virbr2 ExpiryTime:2024-08-12 12:27:41 +0000 UTC Type:0 Mac:52:54:00:10:a6:91 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-535697 Clientid:01:52:54:00:10:a6:91}
	I0812 11:27:47.111444   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined IP address 192.168.50.39 and MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:47.111668   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHPort
	I0812 11:27:47.111899   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHKeyPath
	I0812 11:27:47.112052   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHKeyPath
	I0812 11:27:47.112244   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHUsername
	I0812 11:27:47.112399   46645 main.go:141] libmachine: Using SSH client type: native
	I0812 11:27:47.112557   46645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.39 22 <nil> <nil>}
	I0812 11:27:47.112568   46645 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0812 11:27:47.213138   46645 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0812 11:27:47.213255   46645 main.go:141] libmachine: found compatible host: buildroot
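Provisioner detection above is just `cat /etc/os-release` over SSH followed by a match on the NAME/ID fields. A small sketch of parsing that output, using the exact Buildroot response shown in the log; the helper name is illustrative.

    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    // parseOSRelease turns the key=value lines of /etc/os-release into a map,
    // stripping the optional surrounding quotes.
    func parseOSRelease(out string) map[string]string {
    	fields := map[string]string{}
    	sc := bufio.NewScanner(strings.NewReader(out))
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if line == "" || !strings.Contains(line, "=") {
    			continue
    		}
    		kv := strings.SplitN(line, "=", 2)
    		fields[kv[0]] = strings.Trim(kv[1], `"`)
    	}
    	return fields
    }

    func main() {
    	// The output reported by the SSH command in the log above.
    	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
    	osr := parseOSRelease(out)
    	if osr["ID"] == "buildroot" {
    		fmt.Println("found compatible host:", osr["NAME"], osr["VERSION_ID"])
    	}
    }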
	I0812 11:27:47.213268   46645 main.go:141] libmachine: Provisioning with buildroot...
	I0812 11:27:47.213289   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetMachineName
	I0812 11:27:47.213588   46645 buildroot.go:166] provisioning hostname "kubernetes-upgrade-535697"
	I0812 11:27:47.213616   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetMachineName
	I0812 11:27:47.213861   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHHostname
	I0812 11:27:47.216636   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:47.216999   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:a6:91", ip: ""} in network mk-kubernetes-upgrade-535697: {Iface:virbr2 ExpiryTime:2024-08-12 12:27:41 +0000 UTC Type:0 Mac:52:54:00:10:a6:91 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-535697 Clientid:01:52:54:00:10:a6:91}
	I0812 11:27:47.217040   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined IP address 192.168.50.39 and MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:47.217175   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHPort
	I0812 11:27:47.217385   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHKeyPath
	I0812 11:27:47.217541   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHKeyPath
	I0812 11:27:47.217668   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHUsername
	I0812 11:27:47.217797   46645 main.go:141] libmachine: Using SSH client type: native
	I0812 11:27:47.218002   46645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.39 22 <nil> <nil>}
	I0812 11:27:47.218019   46645 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-535697 && echo "kubernetes-upgrade-535697" | sudo tee /etc/hostname
	I0812 11:27:47.330920   46645 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-535697
	
	I0812 11:27:47.330956   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHHostname
	I0812 11:27:47.333615   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:47.334034   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:a6:91", ip: ""} in network mk-kubernetes-upgrade-535697: {Iface:virbr2 ExpiryTime:2024-08-12 12:27:41 +0000 UTC Type:0 Mac:52:54:00:10:a6:91 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-535697 Clientid:01:52:54:00:10:a6:91}
	I0812 11:27:47.334056   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined IP address 192.168.50.39 and MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:47.334259   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHPort
	I0812 11:27:47.334458   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHKeyPath
	I0812 11:27:47.334640   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHKeyPath
	I0812 11:27:47.334866   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHUsername
	I0812 11:27:47.335095   46645 main.go:141] libmachine: Using SSH client type: native
	I0812 11:27:47.335272   46645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.39 22 <nil> <nil>}
	I0812 11:27:47.335288   46645 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-535697' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-535697/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-535697' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 11:27:47.441747   46645 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 11:27:47.441785   46645 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19409-3774/.minikube CaCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19409-3774/.minikube}
	I0812 11:27:47.441899   46645 buildroot.go:174] setting up certificates
	I0812 11:27:47.441915   46645 provision.go:84] configureAuth start
	I0812 11:27:47.441930   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetMachineName
	I0812 11:27:47.442265   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetIP
	I0812 11:27:47.444704   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:47.445026   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:a6:91", ip: ""} in network mk-kubernetes-upgrade-535697: {Iface:virbr2 ExpiryTime:2024-08-12 12:27:41 +0000 UTC Type:0 Mac:52:54:00:10:a6:91 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-535697 Clientid:01:52:54:00:10:a6:91}
	I0812 11:27:47.445054   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined IP address 192.168.50.39 and MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:47.445168   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHHostname
	I0812 11:27:47.447221   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:47.447526   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:a6:91", ip: ""} in network mk-kubernetes-upgrade-535697: {Iface:virbr2 ExpiryTime:2024-08-12 12:27:41 +0000 UTC Type:0 Mac:52:54:00:10:a6:91 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-535697 Clientid:01:52:54:00:10:a6:91}
	I0812 11:27:47.447555   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined IP address 192.168.50.39 and MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:47.447664   46645 provision.go:143] copyHostCerts
	I0812 11:27:47.447733   46645 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem, removing ...
	I0812 11:27:47.447746   46645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem
	I0812 11:27:47.447816   46645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem (1082 bytes)
	I0812 11:27:47.447960   46645 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem, removing ...
	I0812 11:27:47.447974   46645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem
	I0812 11:27:47.448008   46645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem (1123 bytes)
	I0812 11:27:47.448125   46645 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem, removing ...
	I0812 11:27:47.448135   46645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem
	I0812 11:27:47.448165   46645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem (1679 bytes)
	I0812 11:27:47.448267   46645 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-535697 san=[127.0.0.1 192.168.50.39 kubernetes-upgrade-535697 localhost minikube]
	I0812 11:27:47.593425   46645 provision.go:177] copyRemoteCerts
	I0812 11:27:47.593492   46645 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 11:27:47.593518   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHHostname
	I0812 11:27:47.596245   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:47.596539   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:a6:91", ip: ""} in network mk-kubernetes-upgrade-535697: {Iface:virbr2 ExpiryTime:2024-08-12 12:27:41 +0000 UTC Type:0 Mac:52:54:00:10:a6:91 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-535697 Clientid:01:52:54:00:10:a6:91}
	I0812 11:27:47.596572   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined IP address 192.168.50.39 and MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:47.596714   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHPort
	I0812 11:27:47.596917   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHKeyPath
	I0812 11:27:47.597081   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHUsername
	I0812 11:27:47.597213   46645 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/kubernetes-upgrade-535697/id_rsa Username:docker}
	I0812 11:27:47.676424   46645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0812 11:27:47.704253   46645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0812 11:27:47.732749   46645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0812 11:27:47.757789   46645 provision.go:87] duration metric: took 315.862517ms to configureAuth
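configureAuth above generates a server certificate whose SANs cover the loopback address, the machine IP and its host names, signed by the local minikube CA. The sketch below produces a certificate with the same SAN shape using only the standard crypto/x509 package; it is self-signed for brevity, whereas minikube signs with its ca-key, so treat it as an illustration rather than the real flow.

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Key for the server certificate.
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}

    	// Template with the SAN list reported in the log:
    	// IPs 127.0.0.1 / 192.168.50.39 plus the machine host names.
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-535697"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.39")},
    		DNSNames:     []string{"kubernetes-upgrade-535697", "localhost", "minikube"},
    	}

    	// Self-signed here; the real code passes the CA certificate and CA key
    	// as parent and signer instead of reusing tmpl and key.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
    		panic(err)
    	}
    }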
	I0812 11:27:47.757818   46645 buildroot.go:189] setting minikube options for container-runtime
	I0812 11:27:47.758022   46645 config.go:182] Loaded profile config "kubernetes-upgrade-535697": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0812 11:27:47.758103   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHHostname
	I0812 11:27:47.761362   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:47.761788   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:a6:91", ip: ""} in network mk-kubernetes-upgrade-535697: {Iface:virbr2 ExpiryTime:2024-08-12 12:27:41 +0000 UTC Type:0 Mac:52:54:00:10:a6:91 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-535697 Clientid:01:52:54:00:10:a6:91}
	I0812 11:27:47.761813   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined IP address 192.168.50.39 and MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:47.762071   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHPort
	I0812 11:27:47.762277   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHKeyPath
	I0812 11:27:47.762478   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHKeyPath
	I0812 11:27:47.762633   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHUsername
	I0812 11:27:47.762799   46645 main.go:141] libmachine: Using SSH client type: native
	I0812 11:27:47.763014   46645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.39 22 <nil> <nil>}
	I0812 11:27:47.763039   46645 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 11:27:48.290140   46645 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 11:27:48.290174   46645 main.go:141] libmachine: Checking connection to Docker...
	I0812 11:27:48.290186   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetURL
	I0812 11:27:48.291774   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | Using libvirt version 6000000
	I0812 11:27:48.294629   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:48.295076   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:a6:91", ip: ""} in network mk-kubernetes-upgrade-535697: {Iface:virbr2 ExpiryTime:2024-08-12 12:27:41 +0000 UTC Type:0 Mac:52:54:00:10:a6:91 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-535697 Clientid:01:52:54:00:10:a6:91}
	I0812 11:27:48.295123   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined IP address 192.168.50.39 and MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:48.295335   46645 main.go:141] libmachine: Docker is up and running!
	I0812 11:27:48.295369   46645 main.go:141] libmachine: Reticulating splines...
	I0812 11:27:48.295381   46645 client.go:171] duration metric: took 21.115674163s to LocalClient.Create
	I0812 11:27:48.295418   46645 start.go:167] duration metric: took 21.115744413s to libmachine.API.Create "kubernetes-upgrade-535697"
	I0812 11:27:48.295431   46645 start.go:293] postStartSetup for "kubernetes-upgrade-535697" (driver="kvm2")
	I0812 11:27:48.295448   46645 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 11:27:48.295471   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .DriverName
	I0812 11:27:48.295724   46645 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 11:27:48.295749   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHHostname
	I0812 11:27:48.298208   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:48.298546   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:a6:91", ip: ""} in network mk-kubernetes-upgrade-535697: {Iface:virbr2 ExpiryTime:2024-08-12 12:27:41 +0000 UTC Type:0 Mac:52:54:00:10:a6:91 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-535697 Clientid:01:52:54:00:10:a6:91}
	I0812 11:27:48.298577   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined IP address 192.168.50.39 and MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:48.298718   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHPort
	I0812 11:27:48.298918   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHKeyPath
	I0812 11:27:48.299116   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHUsername
	I0812 11:27:48.299304   46645 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/kubernetes-upgrade-535697/id_rsa Username:docker}
	I0812 11:27:48.383116   46645 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 11:27:48.387419   46645 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 11:27:48.387446   46645 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/addons for local assets ...
	I0812 11:27:48.387519   46645 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/files for local assets ...
	I0812 11:27:48.387605   46645 filesync.go:149] local asset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> 109272.pem in /etc/ssl/certs
	I0812 11:27:48.387716   46645 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 11:27:48.397787   46645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /etc/ssl/certs/109272.pem (1708 bytes)
	I0812 11:27:48.423598   46645 start.go:296] duration metric: took 128.152233ms for postStartSetup
	I0812 11:27:48.423663   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetConfigRaw
	I0812 11:27:48.481427   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetIP
	I0812 11:27:48.484210   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:48.484684   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:a6:91", ip: ""} in network mk-kubernetes-upgrade-535697: {Iface:virbr2 ExpiryTime:2024-08-12 12:27:41 +0000 UTC Type:0 Mac:52:54:00:10:a6:91 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-535697 Clientid:01:52:54:00:10:a6:91}
	I0812 11:27:48.484725   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined IP address 192.168.50.39 and MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:48.485096   46645 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kubernetes-upgrade-535697/config.json ...
	I0812 11:27:48.547833   46645 start.go:128] duration metric: took 21.390088076s to createHost
	I0812 11:27:48.547875   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHHostname
	I0812 11:27:48.550908   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:48.551297   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:a6:91", ip: ""} in network mk-kubernetes-upgrade-535697: {Iface:virbr2 ExpiryTime:2024-08-12 12:27:41 +0000 UTC Type:0 Mac:52:54:00:10:a6:91 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-535697 Clientid:01:52:54:00:10:a6:91}
	I0812 11:27:48.551323   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined IP address 192.168.50.39 and MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:48.551558   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHPort
	I0812 11:27:48.551773   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHKeyPath
	I0812 11:27:48.551918   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHKeyPath
	I0812 11:27:48.552057   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHUsername
	I0812 11:27:48.552191   46645 main.go:141] libmachine: Using SSH client type: native
	I0812 11:27:48.552372   46645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.39 22 <nil> <nil>}
	I0812 11:27:48.552384   46645 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0812 11:27:48.653633   46645 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723462068.626634804
	
	I0812 11:27:48.653664   46645 fix.go:216] guest clock: 1723462068.626634804
	I0812 11:27:48.653700   46645 fix.go:229] Guest: 2024-08-12 11:27:48.626634804 +0000 UTC Remote: 2024-08-12 11:27:48.547857139 +0000 UTC m=+45.718964201 (delta=78.777665ms)
	I0812 11:27:48.653733   46645 fix.go:200] guest clock delta is within tolerance: 78.777665ms
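The clock check above runs `date +%s.%N` on the guest and compares the result with the host's wall clock. A sketch of computing that delta from the returned fractional-seconds string; the tolerance constant is an assumption for illustration, not minikube's configured value.

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    func main() {
    	// Output of `date +%s.%N` on the guest, taken from the log above.
    	guestOut := "1723462068.626634804"

    	secs, err := strconv.ParseFloat(guestOut, 64)
    	if err != nil {
    		panic(err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))

    	// The real flow compares against the host time captured when the command
    	// ran; current time is used here purely for illustration.
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}

    	const tolerance = 2 * time.Second // assumed threshold for this sketch
    	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < tolerance)
    }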
	I0812 11:27:48.653741   46645 start.go:83] releasing machines lock for "kubernetes-upgrade-535697", held for 21.496172187s
	I0812 11:27:48.653825   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .DriverName
	I0812 11:27:48.654124   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetIP
	I0812 11:27:48.657238   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:48.657782   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:a6:91", ip: ""} in network mk-kubernetes-upgrade-535697: {Iface:virbr2 ExpiryTime:2024-08-12 12:27:41 +0000 UTC Type:0 Mac:52:54:00:10:a6:91 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-535697 Clientid:01:52:54:00:10:a6:91}
	I0812 11:27:48.657818   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined IP address 192.168.50.39 and MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:48.657968   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .DriverName
	I0812 11:27:48.658621   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .DriverName
	I0812 11:27:48.658844   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .DriverName
	I0812 11:27:48.658941   46645 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 11:27:48.659004   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHHostname
	I0812 11:27:48.659185   46645 ssh_runner.go:195] Run: cat /version.json
	I0812 11:27:48.659210   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHHostname
	I0812 11:27:48.662203   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:48.662233   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:48.662607   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:a6:91", ip: ""} in network mk-kubernetes-upgrade-535697: {Iface:virbr2 ExpiryTime:2024-08-12 12:27:41 +0000 UTC Type:0 Mac:52:54:00:10:a6:91 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-535697 Clientid:01:52:54:00:10:a6:91}
	I0812 11:27:48.662634   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined IP address 192.168.50.39 and MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:48.662661   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:a6:91", ip: ""} in network mk-kubernetes-upgrade-535697: {Iface:virbr2 ExpiryTime:2024-08-12 12:27:41 +0000 UTC Type:0 Mac:52:54:00:10:a6:91 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-535697 Clientid:01:52:54:00:10:a6:91}
	I0812 11:27:48.662686   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined IP address 192.168.50.39 and MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:48.662908   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHPort
	I0812 11:27:48.662977   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHPort
	I0812 11:27:48.663078   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHKeyPath
	I0812 11:27:48.663187   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHUsername
	I0812 11:27:48.663269   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHKeyPath
	I0812 11:27:48.663319   46645 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/kubernetes-upgrade-535697/id_rsa Username:docker}
	I0812 11:27:48.663553   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHUsername
	I0812 11:27:48.663817   46645 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/kubernetes-upgrade-535697/id_rsa Username:docker}
	I0812 11:27:48.771663   46645 ssh_runner.go:195] Run: systemctl --version
	I0812 11:27:48.777522   46645 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 11:27:48.959629   46645 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 11:27:48.967658   46645 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 11:27:48.967737   46645 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 11:27:48.988679   46645 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0812 11:27:48.988704   46645 start.go:495] detecting cgroup driver to use...
	I0812 11:27:48.988762   46645 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 11:27:49.012105   46645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 11:27:49.028663   46645 docker.go:217] disabling cri-docker service (if available) ...
	I0812 11:27:49.028748   46645 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 11:27:49.047166   46645 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 11:27:49.061841   46645 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 11:27:49.189756   46645 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 11:27:49.341282   46645 docker.go:233] disabling docker service ...
	I0812 11:27:49.341345   46645 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 11:27:49.358158   46645 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 11:27:49.371486   46645 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 11:27:49.521323   46645 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 11:27:49.658386   46645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 11:27:49.672694   46645 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 11:27:49.694204   46645 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0812 11:27:49.694269   46645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:27:49.704386   46645 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 11:27:49.704459   46645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:27:49.716905   46645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:27:49.727635   46645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:27:49.737620   46645 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 11:27:49.748006   46645 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 11:27:49.757578   46645 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0812 11:27:49.757643   46645 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0812 11:27:49.771528   46645 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
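The sequence above probes net.bridge.bridge-nf-call-iptables, falls back to loading br_netfilter when the sysctl node is missing, then enables IPv4 forwarding. A local sketch of that check-then-modprobe fallback with os/exec; the command names mirror the log, error handling is simplified, and it must run as root on the guest to have any effect.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // run executes a command and reports whether it exited successfully,
    // roughly what minikube's remote runner does over SSH.
    func run(name string, args ...string) error {
    	cmd := exec.Command(name, args...)
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	return cmd.Run()
    }

    func main() {
    	// 1. Try to read the bridge netfilter sysctl.
    	if err := run("sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
    		// 2. The node is usually missing until br_netfilter is loaded.
    		fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
    		if err := run("modprobe", "br_netfilter"); err != nil {
    			fmt.Println("modprobe failed:", err)
    		}
    	}
    	// 3. Enable IPv4 forwarding, as the log does with `echo 1 > .../ip_forward`.
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
    		fmt.Println("could not enable ip_forward:", err)
    	}
    }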
	I0812 11:27:49.781981   46645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:27:49.901716   46645 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0812 11:27:50.050416   46645 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 11:27:50.050497   46645 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 11:27:50.055636   46645 start.go:563] Will wait 60s for crictl version
	I0812 11:27:50.055699   46645 ssh_runner.go:195] Run: which crictl
	I0812 11:27:50.059921   46645 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 11:27:50.101525   46645 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0812 11:27:50.101634   46645 ssh_runner.go:195] Run: crio --version
	I0812 11:27:50.133257   46645 ssh_runner.go:195] Run: crio --version
	I0812 11:27:50.164623   46645 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0812 11:27:50.165930   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetIP
	I0812 11:27:50.169427   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:50.169891   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:a6:91", ip: ""} in network mk-kubernetes-upgrade-535697: {Iface:virbr2 ExpiryTime:2024-08-12 12:27:41 +0000 UTC Type:0 Mac:52:54:00:10:a6:91 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-535697 Clientid:01:52:54:00:10:a6:91}
	I0812 11:27:50.169936   46645 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined IP address 192.168.50.39 and MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:27:50.170137   46645 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0812 11:27:50.175103   46645 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 11:27:50.190375   46645 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-535697 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.20.0 ClusterName:kubernetes-upgrade-535697 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.39 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiz
ations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0812 11:27:50.190539   46645 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0812 11:27:50.190594   46645 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 11:27:50.232061   46645 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0812 11:27:50.232132   46645 ssh_runner.go:195] Run: which lz4
	I0812 11:27:50.236827   46645 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0812 11:27:50.241789   46645 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0812 11:27:50.241832   46645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0812 11:27:51.782040   46645 crio.go:462] duration metric: took 1.545258224s to copy over tarball
	I0812 11:27:51.782116   46645 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0812 11:27:54.503316   46645 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.721170387s)
	I0812 11:27:54.503354   46645 crio.go:469] duration metric: took 2.721284975s to extract the tarball
	I0812 11:27:54.503369   46645 ssh_runner.go:146] rm: /preloaded.tar.lz4
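The preload step copies an lz4-compressed image tarball to the guest, extracts it under /var, and reports how long the extraction took. A sketch of timing such an external command with the standard library; the tar invocation mirrors the one in the log and assumes the tarball is actually present at /preloaded.tar.lz4.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	start := time.Now()

    	// Same extraction command the log reports, wrapped in exec.Command.
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		fmt.Printf("extract failed: %v\n%s", err, out)
    		return
    	}

    	// Matches the "duration metric: took ... to extract the tarball" lines.
    	fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
    }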
	I0812 11:27:54.547724   46645 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 11:27:54.598181   46645 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0812 11:27:54.598208   46645 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0812 11:27:54.598285   46645 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 11:27:54.598314   46645 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0812 11:27:54.598352   46645 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0812 11:27:54.598324   46645 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0812 11:27:54.598400   46645 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0812 11:27:54.598459   46645 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0812 11:27:54.598314   46645 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0812 11:27:54.598330   46645 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0812 11:27:54.601349   46645 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0812 11:27:54.600989   46645 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0812 11:27:54.601386   46645 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 11:27:54.601459   46645 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0812 11:27:54.601479   46645 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0812 11:27:54.601560   46645 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0812 11:27:54.602148   46645 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0812 11:27:54.602240   46645 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0812 11:27:54.823500   46645 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0812 11:27:54.848261   46645 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0812 11:27:54.854788   46645 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0812 11:27:54.865990   46645 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0812 11:27:54.866983   46645 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0812 11:27:54.871094   46645 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0812 11:27:54.873804   46645 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0812 11:27:54.873857   46645 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0812 11:27:54.873897   46645 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0812 11:27:54.873928   46645 ssh_runner.go:195] Run: which crictl
	I0812 11:27:54.982270   46645 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0812 11:27:54.982319   46645 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0812 11:27:54.982289   46645 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0812 11:27:54.982365   46645 ssh_runner.go:195] Run: which crictl
	I0812 11:27:54.982390   46645 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0812 11:27:54.982436   46645 ssh_runner.go:195] Run: which crictl
	I0812 11:27:55.012922   46645 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0812 11:27:55.012968   46645 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0812 11:27:55.012972   46645 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0812 11:27:55.013003   46645 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0812 11:27:55.013017   46645 ssh_runner.go:195] Run: which crictl
	I0812 11:27:55.013043   46645 ssh_runner.go:195] Run: which crictl
	I0812 11:27:55.013038   46645 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0812 11:27:55.013062   46645 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0812 11:27:55.013082   46645 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0812 11:27:55.013095   46645 ssh_runner.go:195] Run: which crictl
	I0812 11:27:55.013106   46645 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0812 11:27:55.013010   46645 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0812 11:27:55.013127   46645 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0812 11:27:55.013153   46645 ssh_runner.go:195] Run: which crictl
	I0812 11:27:55.013131   46645 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0812 11:27:55.073359   46645 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0812 11:27:55.073400   46645 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0812 11:27:55.073440   46645 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0812 11:27:55.091334   46645 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0812 11:27:55.091405   46645 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0812 11:27:55.091416   46645 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0812 11:27:55.091337   46645 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0812 11:27:55.227650   46645 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0812 11:27:55.227744   46645 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0812 11:27:55.227708   46645 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0812 11:27:55.230826   46645 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0812 11:27:55.230943   46645 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0812 11:27:55.231083   46645 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0812 11:27:55.246454   46645 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0812 11:27:55.374492   46645 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0812 11:27:55.374554   46645 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0812 11:27:55.374667   46645 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0812 11:27:55.374667   46645 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0812 11:27:55.383124   46645 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0812 11:27:55.383141   46645 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0812 11:27:55.397061   46645 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0812 11:27:55.443750   46645 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 11:27:55.480344   46645 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0812 11:27:55.480459   46645 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0812 11:27:55.480459   46645 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0812 11:27:55.500163   46645 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0812 11:27:55.627506   46645 cache_images.go:92] duration metric: took 1.029278119s to LoadCachedImages
	W0812 11:27:55.627613   46645 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19409-3774/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19409-3774/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0812 11:27:55.627633   46645 kubeadm.go:934] updating node { 192.168.50.39 8443 v1.20.0 crio true true} ...
	I0812 11:27:55.627776   46645 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-535697 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-535697 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
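	For reference, the kubelet unit fragment rendered above can be checked directly on the node once minikube has written it out (the drop-in path, /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, appears in the scp steps further down this log). A minimal manual sketch, assuming SSH access to the VM:

	    # Show the effective kubelet unit plus all drop-ins, then the rendered flags file.
	    sudo systemctl cat kubelet
	    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf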
	I0812 11:27:55.627850   46645 ssh_runner.go:195] Run: crio config
	I0812 11:27:55.679154   46645 cni.go:84] Creating CNI manager for ""
	I0812 11:27:55.679181   46645 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:27:55.679191   46645 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0812 11:27:55.679210   46645 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.39 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-535697 NodeName:kubernetes-upgrade-535697 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.39"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.39 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0812 11:27:55.679374   46645 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.39
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-535697"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.39
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.39"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0812 11:27:55.679448   46645 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0812 11:27:55.690089   46645 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 11:27:55.690171   46645 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0812 11:27:55.700244   46645 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0812 11:27:55.719991   46645 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 11:27:55.739133   46645 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
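	Before the init below runs, the freshly written config can be exercised without mutating the host. This is a sketch, not something the test itself does: the paths come from the lines above (the .new file is copied to /var/tmp/minikube/kubeadm.yaml just before init), and kubeadm's dry-run support for init is partial, so treat its output as advisory.

	    # Render what kubeadm would do with the generated config, without starting anything.
	    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	      kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run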
	I0812 11:27:55.758629   46645 ssh_runner.go:195] Run: grep 192.168.50.39	control-plane.minikube.internal$ /etc/hosts
	I0812 11:27:55.762418   46645 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.39	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 11:27:55.775259   46645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:27:55.898422   46645 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 11:27:55.916007   46645 certs.go:68] Setting up /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kubernetes-upgrade-535697 for IP: 192.168.50.39
	I0812 11:27:55.916029   46645 certs.go:194] generating shared ca certs ...
	I0812 11:27:55.916045   46645 certs.go:226] acquiring lock for ca certs: {Name:mkab0b83a49d5695bc804bf960b468b7a47f73f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:27:55.916209   46645 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key
	I0812 11:27:55.916266   46645 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key
	I0812 11:27:55.916279   46645 certs.go:256] generating profile certs ...
	I0812 11:27:55.916338   46645 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kubernetes-upgrade-535697/client.key
	I0812 11:27:55.916366   46645 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kubernetes-upgrade-535697/client.crt with IP's: []
	I0812 11:27:56.091051   46645 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kubernetes-upgrade-535697/client.crt ...
	I0812 11:27:56.091077   46645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kubernetes-upgrade-535697/client.crt: {Name:mk322505630f2a622e4d8306fdfb47d63d5c7810 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:27:56.091273   46645 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kubernetes-upgrade-535697/client.key ...
	I0812 11:27:56.091291   46645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kubernetes-upgrade-535697/client.key: {Name:mk0c2c45fb41159ff965d2a75f1767dd609f8115 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:27:56.091405   46645 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kubernetes-upgrade-535697/apiserver.key.b50b723e
	I0812 11:27:56.091422   46645 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kubernetes-upgrade-535697/apiserver.crt.b50b723e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.39]
	I0812 11:27:56.346050   46645 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kubernetes-upgrade-535697/apiserver.crt.b50b723e ...
	I0812 11:27:56.346079   46645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kubernetes-upgrade-535697/apiserver.crt.b50b723e: {Name:mk7ec761ab933b7a296d2ecdd1afe3a0b841cfe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:27:56.346230   46645 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kubernetes-upgrade-535697/apiserver.key.b50b723e ...
	I0812 11:27:56.346243   46645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kubernetes-upgrade-535697/apiserver.key.b50b723e: {Name:mk21bbe30ec381e729e3f47a08182b982201d230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:27:56.346311   46645 certs.go:381] copying /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kubernetes-upgrade-535697/apiserver.crt.b50b723e -> /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kubernetes-upgrade-535697/apiserver.crt
	I0812 11:27:56.346417   46645 certs.go:385] copying /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kubernetes-upgrade-535697/apiserver.key.b50b723e -> /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kubernetes-upgrade-535697/apiserver.key
	I0812 11:27:56.346474   46645 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kubernetes-upgrade-535697/proxy-client.key
	I0812 11:27:56.346489   46645 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kubernetes-upgrade-535697/proxy-client.crt with IP's: []
	I0812 11:27:56.437255   46645 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kubernetes-upgrade-535697/proxy-client.crt ...
	I0812 11:27:56.437287   46645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kubernetes-upgrade-535697/proxy-client.crt: {Name:mkd84752b6e5ce3f138af508c732a11c216ca626 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:27:56.437443   46645 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kubernetes-upgrade-535697/proxy-client.key ...
	I0812 11:27:56.437457   46645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kubernetes-upgrade-535697/proxy-client.key: {Name:mk514c0916b93de2d3dae52b4ace094a51047574 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:27:56.437655   46645 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem (1338 bytes)
	W0812 11:27:56.437690   46645 certs.go:480] ignoring /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927_empty.pem, impossibly tiny 0 bytes
	I0812 11:27:56.437699   46645 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem (1679 bytes)
	I0812 11:27:56.437726   46645 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem (1082 bytes)
	I0812 11:27:56.437756   46645 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem (1123 bytes)
	I0812 11:27:56.437784   46645 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem (1679 bytes)
	I0812 11:27:56.437838   46645 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem (1708 bytes)
	I0812 11:27:56.438471   46645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 11:27:56.464985   46645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 11:27:56.494457   46645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 11:27:56.519550   46645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 11:27:56.543851   46645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kubernetes-upgrade-535697/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0812 11:27:56.568349   46645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kubernetes-upgrade-535697/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0812 11:27:56.596243   46645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kubernetes-upgrade-535697/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 11:27:56.623378   46645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kubernetes-upgrade-535697/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0812 11:27:56.646753   46645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /usr/share/ca-certificates/109272.pem (1708 bytes)
	I0812 11:27:56.672793   46645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 11:27:56.699722   46645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem --> /usr/share/ca-certificates/10927.pem (1338 bytes)
	I0812 11:27:56.737124   46645 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
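	A quick way to sanity-check the certificates that were just copied is to read back their subject, SANs, and expiry on the node. A manual sketch, using paths taken from the scp lines above:

	    # Inspect the copied API server certificate (subject, expiry, SANs).
	    sudo openssl x509 -noout -subject -enddate -in /var/lib/minikube/certs/apiserver.crt
	    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'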
	I0812 11:27:56.756334   46645 ssh_runner.go:195] Run: openssl version
	I0812 11:27:56.766467   46645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 11:27:56.781349   46645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:27:56.789428   46645 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:27:56.789503   46645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:27:56.795851   46645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 11:27:56.818959   46645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10927.pem && ln -fs /usr/share/ca-certificates/10927.pem /etc/ssl/certs/10927.pem"
	I0812 11:27:56.832949   46645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10927.pem
	I0812 11:27:56.837745   46645 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 10:33 /usr/share/ca-certificates/10927.pem
	I0812 11:27:56.837807   46645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10927.pem
	I0812 11:27:56.843422   46645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10927.pem /etc/ssl/certs/51391683.0"
	I0812 11:27:56.854275   46645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109272.pem && ln -fs /usr/share/ca-certificates/109272.pem /etc/ssl/certs/109272.pem"
	I0812 11:27:56.865570   46645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109272.pem
	I0812 11:27:56.870297   46645 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 10:33 /usr/share/ca-certificates/109272.pem
	I0812 11:27:56.870387   46645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109272.pem
	I0812 11:27:56.876108   46645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109272.pem /etc/ssl/certs/3ec20f2e.0"
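	The test/ln pairs above follow the standard OpenSSL trust-store layout: each PEM placed under /usr/share/ca-certificates also gets a subject-hash symlink under /etc/ssl/certs so OpenSSL can resolve it. Roughly, the pattern being applied is:

	    # Compute the subject hash and create the <hash>.0 lookup link (sketch of the step above).
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"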
	I0812 11:27:56.886667   46645 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 11:27:56.890985   46645 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0812 11:27:56.891043   46645 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-535697 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-535697 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.39 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:27:56.891123   46645 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0812 11:27:56.891197   46645 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 11:27:56.928324   46645 cri.go:89] found id: ""
	I0812 11:27:56.928404   46645 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0812 11:27:56.938650   46645 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 11:27:56.949018   46645 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 11:27:56.959246   46645 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 11:27:56.959267   46645 kubeadm.go:157] found existing configuration files:
	
	I0812 11:27:56.959318   46645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 11:27:56.969333   46645 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 11:27:56.969422   46645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 11:27:56.982647   46645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 11:27:56.994660   46645 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 11:27:56.994740   46645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 11:27:57.005114   46645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 11:27:57.014575   46645 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 11:27:57.014635   46645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 11:27:57.024820   46645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 11:27:57.034171   46645 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 11:27:57.034242   46645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 11:27:57.043885   46645 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 11:27:57.175991   46645 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0812 11:27:57.176237   46645 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 11:27:57.331888   46645 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 11:27:57.332036   46645 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 11:27:57.332206   46645 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0812 11:27:57.516352   46645 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 11:27:57.684742   46645 out.go:204]   - Generating certificates and keys ...
	I0812 11:27:57.684914   46645 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 11:27:57.685006   46645 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 11:27:57.726136   46645 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0812 11:27:58.034371   46645 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0812 11:27:58.115684   46645 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0812 11:27:58.354048   46645 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0812 11:27:58.652195   46645 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0812 11:27:58.652515   46645 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-535697 localhost] and IPs [192.168.50.39 127.0.0.1 ::1]
	I0812 11:27:58.722351   46645 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0812 11:27:58.722534   46645 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-535697 localhost] and IPs [192.168.50.39 127.0.0.1 ::1]
	I0812 11:27:58.800057   46645 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0812 11:27:59.063906   46645 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0812 11:27:59.203700   46645 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0812 11:27:59.204000   46645 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 11:27:59.370599   46645 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 11:27:59.878032   46645 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 11:28:00.072876   46645 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 11:28:00.237851   46645 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 11:28:00.257088   46645 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 11:28:00.258397   46645 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 11:28:00.258478   46645 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 11:28:00.410751   46645 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 11:28:00.412597   46645 out.go:204]   - Booting up control plane ...
	I0812 11:28:00.412745   46645 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 11:28:00.422952   46645 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 11:28:00.424204   46645 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 11:28:00.425164   46645 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 11:28:00.431307   46645 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0812 11:28:40.423475   46645 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0812 11:28:40.423607   46645 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:28:40.423880   46645 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:28:45.424564   46645 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:28:45.424771   46645 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:28:55.424228   46645 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:28:55.424566   46645 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:29:15.423703   46645 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:29:15.424028   46645 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:29:55.425273   46645 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:29:55.425526   46645 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:29:55.425539   46645 kubeadm.go:310] 
	I0812 11:29:55.425597   46645 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0812 11:29:55.425661   46645 kubeadm.go:310] 		timed out waiting for the condition
	I0812 11:29:55.425699   46645 kubeadm.go:310] 
	I0812 11:29:55.425774   46645 kubeadm.go:310] 	This error is likely caused by:
	I0812 11:29:55.425875   46645 kubeadm.go:310] 		- The kubelet is not running
	I0812 11:29:55.426022   46645 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0812 11:29:55.426032   46645 kubeadm.go:310] 
	I0812 11:29:55.426185   46645 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0812 11:29:55.426239   46645 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0812 11:29:55.426294   46645 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0812 11:29:55.426304   46645 kubeadm.go:310] 
	I0812 11:29:55.426445   46645 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0812 11:29:55.426561   46645 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0812 11:29:55.426575   46645 kubeadm.go:310] 
	I0812 11:29:55.426716   46645 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0812 11:29:55.426801   46645 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0812 11:29:55.426890   46645 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0812 11:29:55.426996   46645 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0812 11:29:55.427013   46645 kubeadm.go:310] 
	I0812 11:29:55.427491   46645 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 11:29:55.427571   46645 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0812 11:29:55.427627   46645 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0812 11:29:55.427752   46645 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-535697 localhost] and IPs [192.168.50.39 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-535697 localhost] and IPs [192.168.50.39 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-535697 localhost] and IPs [192.168.50.39 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-535697 localhost] and IPs [192.168.50.39 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
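	The kubeadm guidance quoted above boils down to two probes on the node. A minimal sketch (socket path taken from the log; run over minikube ssh or directly inside the VM):

	    # 1) Why isn't the kubelet serving /healthz on 10248?
	    sudo systemctl status kubelet --no-pager
	    sudo journalctl -xeu kubelet --no-pager | tail -n 100
	    # 2) Did any control-plane container start and then crash?
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause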
	
	I0812 11:29:55.427794   46645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0812 11:29:55.965490   46645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:29:55.979366   46645 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 11:29:55.988565   46645 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 11:29:55.988591   46645 kubeadm.go:157] found existing configuration files:
	
	I0812 11:29:55.988657   46645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 11:29:55.997597   46645 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 11:29:55.997663   46645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 11:29:56.006574   46645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 11:29:56.015288   46645 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 11:29:56.015360   46645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 11:29:56.024559   46645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 11:29:56.033868   46645 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 11:29:56.033932   46645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 11:29:56.043453   46645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 11:29:56.052364   46645 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 11:29:56.052439   46645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 11:29:56.062000   46645 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 11:29:56.261582   46645 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 11:31:52.407839   46645 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0812 11:31:52.407969   46645 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0812 11:31:52.410052   46645 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0812 11:31:52.410116   46645 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 11:31:52.410202   46645 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 11:31:52.410320   46645 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 11:31:52.410448   46645 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0812 11:31:52.410547   46645 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 11:31:52.412824   46645 out.go:204]   - Generating certificates and keys ...
	I0812 11:31:52.412941   46645 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 11:31:52.413024   46645 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 11:31:52.413141   46645 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0812 11:31:52.413204   46645 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0812 11:31:52.413263   46645 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0812 11:31:52.413312   46645 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0812 11:31:52.413373   46645 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0812 11:31:52.413443   46645 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0812 11:31:52.413523   46645 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0812 11:31:52.413610   46645 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0812 11:31:52.413653   46645 kubeadm.go:310] [certs] Using the existing "sa" key
	I0812 11:31:52.413724   46645 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 11:31:52.413808   46645 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 11:31:52.413884   46645 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 11:31:52.413974   46645 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 11:31:52.414046   46645 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 11:31:52.414183   46645 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 11:31:52.414286   46645 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 11:31:52.414330   46645 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 11:31:52.414429   46645 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 11:31:52.417555   46645 out.go:204]   - Booting up control plane ...
	I0812 11:31:52.417647   46645 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 11:31:52.417730   46645 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 11:31:52.417805   46645 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 11:31:52.417892   46645 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 11:31:52.418046   46645 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0812 11:31:52.418111   46645 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0812 11:31:52.418170   46645 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:31:52.418381   46645 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:31:52.418468   46645 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:31:52.418679   46645 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:31:52.418772   46645 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:31:52.418965   46645 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:31:52.419067   46645 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:31:52.419262   46645 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:31:52.419341   46645 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:31:52.419495   46645 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:31:52.419502   46645 kubeadm.go:310] 
	I0812 11:31:52.419542   46645 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0812 11:31:52.419581   46645 kubeadm.go:310] 		timed out waiting for the condition
	I0812 11:31:52.419588   46645 kubeadm.go:310] 
	I0812 11:31:52.419617   46645 kubeadm.go:310] 	This error is likely caused by:
	I0812 11:31:52.419657   46645 kubeadm.go:310] 		- The kubelet is not running
	I0812 11:31:52.419765   46645 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0812 11:31:52.419776   46645 kubeadm.go:310] 
	I0812 11:31:52.419892   46645 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0812 11:31:52.419936   46645 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0812 11:31:52.419979   46645 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0812 11:31:52.419989   46645 kubeadm.go:310] 
	I0812 11:31:52.420091   46645 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0812 11:31:52.420159   46645 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0812 11:31:52.420166   46645 kubeadm.go:310] 
	I0812 11:31:52.420262   46645 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0812 11:31:52.420341   46645 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0812 11:31:52.420443   46645 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0812 11:31:52.420548   46645 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0812 11:31:52.420591   46645 kubeadm.go:310] 
	I0812 11:31:52.420625   46645 kubeadm.go:394] duration metric: took 3m55.529585779s to StartCluster
	I0812 11:31:52.420690   46645 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:31:52.420757   46645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:31:52.464161   46645 cri.go:89] found id: ""
	I0812 11:31:52.464189   46645 logs.go:276] 0 containers: []
	W0812 11:31:52.464197   46645 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:31:52.464204   46645 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:31:52.464265   46645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:31:52.503406   46645 cri.go:89] found id: ""
	I0812 11:31:52.503437   46645 logs.go:276] 0 containers: []
	W0812 11:31:52.503449   46645 logs.go:278] No container was found matching "etcd"
	I0812 11:31:52.503457   46645 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:31:52.503530   46645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:31:52.545571   46645 cri.go:89] found id: ""
	I0812 11:31:52.545606   46645 logs.go:276] 0 containers: []
	W0812 11:31:52.545617   46645 logs.go:278] No container was found matching "coredns"
	I0812 11:31:52.545625   46645 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:31:52.545693   46645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:31:52.590923   46645 cri.go:89] found id: ""
	I0812 11:31:52.590956   46645 logs.go:276] 0 containers: []
	W0812 11:31:52.590968   46645 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:31:52.590976   46645 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:31:52.591055   46645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:31:52.635567   46645 cri.go:89] found id: ""
	I0812 11:31:52.635592   46645 logs.go:276] 0 containers: []
	W0812 11:31:52.635603   46645 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:31:52.635611   46645 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:31:52.635687   46645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:31:52.681059   46645 cri.go:89] found id: ""
	I0812 11:31:52.681088   46645 logs.go:276] 0 containers: []
	W0812 11:31:52.681098   46645 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:31:52.681104   46645 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:31:52.681166   46645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:31:52.729738   46645 cri.go:89] found id: ""
	I0812 11:31:52.729775   46645 logs.go:276] 0 containers: []
	W0812 11:31:52.729788   46645 logs.go:278] No container was found matching "kindnet"
	I0812 11:31:52.729800   46645 logs.go:123] Gathering logs for kubelet ...
	I0812 11:31:52.729817   46645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:31:52.790016   46645 logs.go:123] Gathering logs for dmesg ...
	I0812 11:31:52.790064   46645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:31:52.803804   46645 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:31:52.803838   46645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:31:52.940951   46645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:31:52.940984   46645 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:31:52.940999   46645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:31:53.052916   46645 logs.go:123] Gathering logs for container status ...
	I0812 11:31:53.052953   46645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0812 11:31:53.094219   46645 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0812 11:31:53.094273   46645 out.go:239] * 
	W0812 11:31:53.094339   46645 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0812 11:31:53.094371   46645 out.go:239] * 
	W0812 11:31:53.095242   46645 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 11:31:53.098241   46645 out.go:177] 
	W0812 11:31:53.099240   46645 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0812 11:31:53.099280   46645 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0812 11:31:53.099301   46645 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0812 11:31:53.100567   46645 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-535697 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
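Note: the kubelet never answered on localhost:10248 during the v1.20.0 bootstrap, so kubeadm timed out at the wait-control-plane phase and minikube exited with K8S_KUBELET_NOT_RUNNING. A minimal triage sketch, assuming the kubernetes-upgrade-535697 VM is still reachable via `minikube ssh`; the commands only restate the advice kubeadm and minikube print above, and the final `--extra-config` retry is the workaround minikube itself suggests, not a verified fix for this run:

  # Inspect kubelet state and recent logs on the node (as advised in the kubeadm output above)
  out/minikube-linux-amd64 -p kubernetes-upgrade-535697 ssh "sudo systemctl status kubelet"
  out/minikube-linux-amd64 -p kubernetes-upgrade-535697 ssh "sudo journalctl -xeu kubelet | tail -n 100"
  # List any control-plane containers CRI-O managed to start
  out/minikube-linux-amd64 -p kubernetes-upgrade-535697 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
  # Retry the bootstrap with the cgroup-driver override suggested in the log
  out/minikube-linux-amd64 start -p kubernetes-upgrade-535697 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd
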
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-535697
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-535697: (2.290680257s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-535697 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-535697 status --format={{.Host}}: exit status 7 (68.240405ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-535697 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-535697 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.966184123s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-535697 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-535697 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-535697 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (82.962695ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-535697] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-rc.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-535697
	    minikube start -p kubernetes-upgrade-535697 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5356972 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-535697 --kubernetes-version=v1.31.0-rc.0
	    

                                                
                                                
** /stderr **
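As expected, the downgrade attempt exits with status 106: minikube refuses to move the existing v1.31.0-rc.0 cluster back to v1.20.0. A brief sketch of acting on the first suggestion above, outside this test (assumptions: the delete step is acceptable even though it destroys the current profile, and the driver/runtime flags mirror the ones used throughout this run):

  # Confirm what the cluster is actually serving before picking an option
  kubectl --context kubernetes-upgrade-535697 version --output=json
  # Suggestion 1 from the output above: recreate the profile at the older version
  out/minikube-linux-amd64 delete -p kubernetes-upgrade-535697
  out/minikube-linux-amd64 start -p kubernetes-upgrade-535697 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio
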
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-535697 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0812 11:33:30.975319   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-535697 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (6m30.681436744s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-08-12 11:39:06.307239666 +0000 UTC m=+4735.464162238
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-535697 -n kubernetes-upgrade-535697
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-535697 logs -n 25
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-options-967682                                 | cert-options-967682          | jenkins | v1.33.1 | 12 Aug 24 11:32 UTC | 12 Aug 24 11:32 UTC |
	| start   | -p old-k8s-version-835962                              | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 11:32 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-535697                           | kubernetes-upgrade-535697    | jenkins | v1.33.1 | 12 Aug 24 11:32 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-535697                           | kubernetes-upgrade-535697    | jenkins | v1.33.1 | 12 Aug 24 11:32 UTC | 12 Aug 24 11:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p pause-693259                                        | pause-693259                 | jenkins | v1.33.1 | 12 Aug 24 11:33 UTC | 12 Aug 24 11:34 UTC |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p cert-expiration-002803                              | cert-expiration-002803       | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| pause   | -p pause-693259                                        | pause-693259                 | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| unpause | -p pause-693259                                        | pause-693259                 | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-693259                                        | pause-693259                 | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-693259                                        | pause-693259                 | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-693259                                        | pause-693259                 | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	| start   | -p embed-certs-093615                                  | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:35 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-002803                              | cert-expiration-002803       | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	| delete  | -p                                                     | disable-driver-mounts-101845 | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	|         | disable-driver-mounts-101845                           |                              |         |         |                     |                     |
	| start   | -p no-preload-993542                                   | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:36 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-093615            | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 11:35 UTC | 12 Aug 24 11:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-093615                                  | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 11:35 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-993542             | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:36 UTC | 12 Aug 24 11:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-993542                                   | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-835962        | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 11:37 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-093615                 | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 11:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-093615                                  | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 11:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-835962                              | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 11:38 UTC | 12 Aug 24 11:39 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-835962             | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC | 12 Aug 24 11:39 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-835962                              | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 11:39:04
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 11:39:04.267946   57198 out.go:291] Setting OutFile to fd 1 ...
	I0812 11:39:04.268232   57198 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:39:04.268243   57198 out.go:304] Setting ErrFile to fd 2...
	I0812 11:39:04.268248   57198 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:39:04.268506   57198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 11:39:04.269124   57198 out.go:298] Setting JSON to false
	I0812 11:39:04.270163   57198 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4885,"bootTime":1723457859,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 11:39:04.270225   57198 start.go:139] virtualization: kvm guest
	I0812 11:39:04.272642   57198 out.go:177] * [old-k8s-version-835962] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 11:39:04.274125   57198 notify.go:220] Checking for updates...
	I0812 11:39:04.274170   57198 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 11:39:04.275658   57198 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 11:39:04.277167   57198 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 11:39:04.278719   57198 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 11:39:04.280232   57198 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 11:39:04.281947   57198 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 11:39:04.284055   57198 config.go:182] Loaded profile config "old-k8s-version-835962": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0812 11:39:04.284518   57198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:39:04.284613   57198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:39:04.301959   57198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39021
	I0812 11:39:04.302418   57198 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:39:04.303050   57198 main.go:141] libmachine: Using API Version  1
	I0812 11:39:04.303082   57198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:39:04.303461   57198 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:39:04.303656   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .DriverName
	I0812 11:39:04.305580   57198 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0812 11:39:04.306948   57198 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 11:39:04.307390   57198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:39:04.307430   57198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:39:04.322711   57198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46843
	I0812 11:39:04.323143   57198 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:39:04.323639   57198 main.go:141] libmachine: Using API Version  1
	I0812 11:39:04.323659   57198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:39:04.324008   57198 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:39:04.324187   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .DriverName
	I0812 11:39:04.366911   57198 out.go:177] * Using the kvm2 driver based on existing profile
	I0812 11:39:04.368545   57198 start.go:297] selected driver: kvm2
	I0812 11:39:04.368561   57198 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-835962 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-835962 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:39:04.368699   57198 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 11:39:04.369437   57198 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 11:39:04.369524   57198 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19409-3774/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 11:39:04.385687   57198 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 11:39:04.386094   57198 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 11:39:04.386121   57198 cni.go:84] Creating CNI manager for ""
	I0812 11:39:04.386133   57198 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:39:04.386187   57198 start.go:340] cluster config:
	{Name:old-k8s-version-835962 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-835962 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:39:04.386334   57198 iso.go:125] acquiring lock: {Name:mk817f4bb6a5031e68978029203802957205757f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 11:39:04.389877   57198 out.go:177] * Starting "old-k8s-version-835962" primary control-plane node in "old-k8s-version-835962" cluster
	I0812 11:39:05.318876   53820 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-rc.0
	I0812 11:39:05.318955   53820 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 11:39:05.319032   53820 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 11:39:05.319125   53820 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 11:39:05.319207   53820 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0812 11:39:05.319269   53820 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 11:39:05.320914   53820 out.go:204]   - Generating certificates and keys ...
	I0812 11:39:05.320988   53820 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 11:39:05.321043   53820 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 11:39:05.321126   53820 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0812 11:39:05.321205   53820 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0812 11:39:05.321267   53820 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0812 11:39:05.321357   53820 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0812 11:39:05.321455   53820 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0812 11:39:05.321550   53820 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0812 11:39:05.321658   53820 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0812 11:39:05.321744   53820 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0812 11:39:05.321805   53820 kubeadm.go:310] [certs] Using the existing "sa" key
	I0812 11:39:05.321888   53820 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 11:39:05.321967   53820 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 11:39:05.322063   53820 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0812 11:39:05.322162   53820 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 11:39:05.322251   53820 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 11:39:05.322308   53820 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 11:39:05.322383   53820 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 11:39:05.322455   53820 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 11:39:05.323966   53820 out.go:204]   - Booting up control plane ...
	I0812 11:39:05.324067   53820 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 11:39:05.324167   53820 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 11:39:05.324261   53820 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 11:39:05.324369   53820 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 11:39:05.324490   53820 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 11:39:05.324551   53820 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 11:39:05.324719   53820 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0812 11:39:05.324845   53820 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0812 11:39:05.324963   53820 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.532868ms
	I0812 11:39:05.325073   53820 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0812 11:39:05.325131   53820 kubeadm.go:310] [api-check] The API server is healthy after 4.502187032s
	I0812 11:39:05.325241   53820 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0812 11:39:05.325370   53820 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0812 11:39:05.325456   53820 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0812 11:39:05.325674   53820 kubeadm.go:310] [mark-control-plane] Marking the node kubernetes-upgrade-535697 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0812 11:39:05.325730   53820 kubeadm.go:310] [bootstrap-token] Using token: pybe0k.f0sie35j69iu0dr5
	I0812 11:39:05.327180   53820 out.go:204]   - Configuring RBAC rules ...
	I0812 11:39:05.327297   53820 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0812 11:39:05.327392   53820 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0812 11:39:05.327531   53820 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0812 11:39:05.327653   53820 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0812 11:39:05.327781   53820 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0812 11:39:05.327891   53820 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0812 11:39:05.328020   53820 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0812 11:39:05.328079   53820 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0812 11:39:05.328146   53820 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0812 11:39:05.328160   53820 kubeadm.go:310] 
	I0812 11:39:05.328213   53820 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0812 11:39:05.328219   53820 kubeadm.go:310] 
	I0812 11:39:05.328302   53820 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0812 11:39:05.328312   53820 kubeadm.go:310] 
	I0812 11:39:05.328337   53820 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0812 11:39:05.328412   53820 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0812 11:39:05.328464   53820 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0812 11:39:05.328476   53820 kubeadm.go:310] 
	I0812 11:39:05.328556   53820 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0812 11:39:05.328563   53820 kubeadm.go:310] 
	I0812 11:39:05.328619   53820 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0812 11:39:05.328629   53820 kubeadm.go:310] 
	I0812 11:39:05.328704   53820 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0812 11:39:05.328801   53820 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0812 11:39:05.328926   53820 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0812 11:39:05.328935   53820 kubeadm.go:310] 
	I0812 11:39:05.329018   53820 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0812 11:39:05.329082   53820 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0812 11:39:05.329088   53820 kubeadm.go:310] 
	I0812 11:39:05.329177   53820 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pybe0k.f0sie35j69iu0dr5 \
	I0812 11:39:05.329303   53820 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc \
	I0812 11:39:05.329333   53820 kubeadm.go:310] 	--control-plane 
	I0812 11:39:05.329344   53820 kubeadm.go:310] 
	I0812 11:39:05.329442   53820 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0812 11:39:05.329451   53820 kubeadm.go:310] 
	I0812 11:39:05.329551   53820 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pybe0k.f0sie35j69iu0dr5 \
	I0812 11:39:05.329682   53820 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc 
	I0812 11:39:05.329695   53820 cni.go:84] Creating CNI manager for ""
	I0812 11:39:05.329703   53820 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:39:05.331571   53820 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0812 11:39:05.333075   53820 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0812 11:39:05.345282   53820 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0812 11:39:05.365576   53820 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 11:39:05.365668   53820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:39:05.365703   53820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kubernetes-upgrade-535697 minikube.k8s.io/updated_at=2024_08_12T11_39_05_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7 minikube.k8s.io/name=kubernetes-upgrade-535697 minikube.k8s.io/primary=true
	I0812 11:39:05.474615   53820 ops.go:34] apiserver oom_adj: -16
	I0812 11:39:05.488949   53820 kubeadm.go:1113] duration metric: took 123.347642ms to wait for elevateKubeSystemPrivileges
	I0812 11:39:05.488982   53820 kubeadm.go:394] duration metric: took 4m11.819321784s to StartCluster
	I0812 11:39:05.488999   53820 settings.go:142] acquiring lock: {Name:mk4060151bda1f131d6abfbbd19b6a8ab3b5e774 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:39:05.489074   53820 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 11:39:05.490446   53820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/kubeconfig: {Name:mke68f1d372d8e5baa69199b426efaec54860499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:39:05.490699   53820 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.39 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 11:39:05.490733   53820 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0812 11:39:05.490813   53820 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-535697"
	I0812 11:39:05.490835   53820 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-535697"
	I0812 11:39:05.490850   53820 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-535697"
	W0812 11:39:05.490860   53820 addons.go:243] addon storage-provisioner should already be in state true
	I0812 11:39:05.490871   53820 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-535697"
	I0812 11:39:05.490884   53820 config.go:182] Loaded profile config "kubernetes-upgrade-535697": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0812 11:39:05.490890   53820 host.go:66] Checking if "kubernetes-upgrade-535697" exists ...
	I0812 11:39:05.491345   53820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:39:05.491379   53820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:39:05.491354   53820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:39:05.491489   53820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:39:05.492786   53820 out.go:177] * Verifying Kubernetes components...
	I0812 11:39:05.494246   53820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:39:05.506930   53820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41457
	I0812 11:39:05.507240   53820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35431
	I0812 11:39:05.507484   53820 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:39:05.507664   53820 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:39:05.508052   53820 main.go:141] libmachine: Using API Version  1
	I0812 11:39:05.508072   53820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:39:05.508175   53820 main.go:141] libmachine: Using API Version  1
	I0812 11:39:05.508195   53820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:39:05.508404   53820 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:39:05.508507   53820 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:39:05.508575   53820 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetState
	I0812 11:39:05.509039   53820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:39:05.509070   53820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:39:05.511190   53820 kapi.go:59] client config for kubernetes-upgrade-535697: &rest.Config{Host:"https://192.168.50.39:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kubernetes-upgrade-535697/client.crt", KeyFile:"/home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kubernetes-upgrade-535697/client.key", CAFile:"/home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), C
AData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d03100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0812 11:39:05.511459   53820 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-535697"
	W0812 11:39:05.511474   53820 addons.go:243] addon default-storageclass should already be in state true
	I0812 11:39:05.511501   53820 host.go:66] Checking if "kubernetes-upgrade-535697" exists ...
	I0812 11:39:05.511769   53820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:39:05.511791   53820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:39:05.525376   53820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43791
	I0812 11:39:05.525808   53820 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:39:05.525995   53820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45785
	I0812 11:39:05.526320   53820 main.go:141] libmachine: Using API Version  1
	I0812 11:39:05.526349   53820 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:39:05.526370   53820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:39:05.526735   53820 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:39:05.526811   53820 main.go:141] libmachine: Using API Version  1
	I0812 11:39:05.526832   53820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:39:05.526944   53820 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetState
	I0812 11:39:05.527149   53820 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:39:05.527584   53820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:39:05.527612   53820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:39:05.528824   53820 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .DriverName
	I0812 11:39:05.531205   53820 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 11:39:05.532692   53820 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 11:39:05.532707   53820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 11:39:05.532722   53820 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHHostname
	I0812 11:39:05.535898   53820 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:39:05.536399   53820 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:a6:91", ip: ""} in network mk-kubernetes-upgrade-535697: {Iface:virbr2 ExpiryTime:2024-08-12 12:32:13 +0000 UTC Type:0 Mac:52:54:00:10:a6:91 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-535697 Clientid:01:52:54:00:10:a6:91}
	I0812 11:39:05.536418   53820 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined IP address 192.168.50.39 and MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:39:05.536527   53820 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHPort
	I0812 11:39:05.536755   53820 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHKeyPath
	I0812 11:39:05.536936   53820 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHUsername
	I0812 11:39:05.537073   53820 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/kubernetes-upgrade-535697/id_rsa Username:docker}
	I0812 11:39:05.544441   53820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42109
	I0812 11:39:05.544894   53820 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:39:05.545452   53820 main.go:141] libmachine: Using API Version  1
	I0812 11:39:05.545469   53820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:39:05.545787   53820 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:39:05.546003   53820 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetState
	I0812 11:39:05.547757   53820 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .DriverName
	I0812 11:39:05.548040   53820 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 11:39:05.548067   53820 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 11:39:05.548094   53820 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHHostname
	I0812 11:39:05.550535   53820 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:39:05.550890   53820 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:a6:91", ip: ""} in network mk-kubernetes-upgrade-535697: {Iface:virbr2 ExpiryTime:2024-08-12 12:32:13 +0000 UTC Type:0 Mac:52:54:00:10:a6:91 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-535697 Clientid:01:52:54:00:10:a6:91}
	I0812 11:39:05.550918   53820 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | domain kubernetes-upgrade-535697 has defined IP address 192.168.50.39 and MAC address 52:54:00:10:a6:91 in network mk-kubernetes-upgrade-535697
	I0812 11:39:05.551054   53820 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHPort
	I0812 11:39:05.551220   53820 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHKeyPath
	I0812 11:39:05.551389   53820 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .GetSSHUsername
	I0812 11:39:05.551523   53820 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/kubernetes-upgrade-535697/id_rsa Username:docker}
	I0812 11:39:05.688069   53820 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 11:39:05.715410   53820 api_server.go:52] waiting for apiserver process to appear ...
	I0812 11:39:05.715538   53820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:39:05.736289   53820 api_server.go:72] duration metric: took 245.548948ms to wait for apiserver process to appear ...
	I0812 11:39:05.736318   53820 api_server.go:88] waiting for apiserver healthz status ...
	I0812 11:39:05.736354   53820 api_server.go:253] Checking apiserver healthz at https://192.168.50.39:8443/healthz ...
	I0812 11:39:05.742193   53820 api_server.go:279] https://192.168.50.39:8443/healthz returned 200:
	ok
	I0812 11:39:05.752005   53820 api_server.go:141] control plane version: v1.31.0-rc.0
	I0812 11:39:05.752031   53820 api_server.go:131] duration metric: took 15.706207ms to wait for apiserver health ...
	I0812 11:39:05.752038   53820 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 11:39:05.759821   53820 system_pods.go:59] 4 kube-system pods found
	I0812 11:39:05.759869   53820 system_pods.go:61] "etcd-kubernetes-upgrade-535697" [9bceb57e-fd5b-4a52-94cb-f73f57af1d95] Running
	I0812 11:39:05.759883   53820 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-535697" [52c95846-69a1-4ab4-bbe8-a72691040716] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0812 11:39:05.759895   53820 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-535697" [3f15553d-7121-4d87-adb5-b60bf02e7e09] Running
	I0812 11:39:05.759913   53820 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-535697" [e04a6a2f-78e9-40f9-9fd1-7db59b48e589] Running
	I0812 11:39:05.759921   53820 system_pods.go:74] duration metric: took 7.876954ms to wait for pod list to return data ...
	I0812 11:39:05.759934   53820 kubeadm.go:582] duration metric: took 269.200051ms to wait for: map[apiserver:true system_pods:true]
	I0812 11:39:05.759953   53820 node_conditions.go:102] verifying NodePressure condition ...
	I0812 11:39:05.770042   53820 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 11:39:05.770072   53820 node_conditions.go:123] node cpu capacity is 2
	I0812 11:39:05.770088   53820 node_conditions.go:105] duration metric: took 10.127836ms to run NodePressure ...
	I0812 11:39:05.770102   53820 start.go:241] waiting for startup goroutines ...
	I0812 11:39:05.818128   53820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 11:39:05.838711   53820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 11:39:06.229834   53820 main.go:141] libmachine: Making call to close driver server
	I0812 11:39:06.229857   53820 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .Close
	I0812 11:39:06.229887   53820 main.go:141] libmachine: Making call to close driver server
	I0812 11:39:06.229908   53820 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .Close
	I0812 11:39:06.230169   53820 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:39:06.230200   53820 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:39:06.230207   53820 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | Closing plugin on server side
	I0812 11:39:06.230210   53820 main.go:141] libmachine: Making call to close driver server
	I0812 11:39:06.230224   53820 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .Close
	I0812 11:39:06.230250   53820 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:39:06.230267   53820 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:39:06.230286   53820 main.go:141] libmachine: Making call to close driver server
	I0812 11:39:06.230298   53820 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .Close
	I0812 11:39:06.230457   53820 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:39:06.230471   53820 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:39:06.230488   53820 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:39:06.230507   53820 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:39:06.230521   53820 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | Closing plugin on server side
	I0812 11:39:06.237890   53820 main.go:141] libmachine: Making call to close driver server
	I0812 11:39:06.237913   53820 main.go:141] libmachine: (kubernetes-upgrade-535697) Calling .Close
	I0812 11:39:06.238183   53820 main.go:141] libmachine: (kubernetes-upgrade-535697) DBG | Closing plugin on server side
	I0812 11:39:06.238194   53820 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:39:06.238207   53820 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:39:06.240167   53820 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0812 11:39:06.241564   53820 addons.go:510] duration metric: took 750.83288ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0812 11:39:06.241604   53820 start.go:246] waiting for cluster config update ...
	I0812 11:39:06.241618   53820 start.go:255] writing updated cluster config ...
	I0812 11:39:06.241881   53820 ssh_runner.go:195] Run: rm -f paused
	I0812 11:39:06.290209   53820 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-rc.0 (minor skew: 1)
	I0812 11:39:06.292245   53820 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-535697" cluster and "default" namespace by default
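	For reference, a minimal sketch (not part of the test run) of how the upgraded "kubernetes-upgrade-535697" cluster reported above could be checked by hand with standard kubectl commands; the context name is taken from the log, and the version-skew message just above (client 1.30.3 vs. server 1.31.0-rc.0) is within kubectl's supported +/-1 minor skew.
	
	    # Illustrative only -- standard kubectl checks against the context created by this run.
	    kubectl --context kubernetes-upgrade-535697 get nodes -o wide
	    kubectl --context kubernetes-upgrade-535697 get pods -n kube-system
	    kubectl --context kubernetes-upgrade-535697 version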
	
	
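	For context on the "Configuring bridge CNI" step earlier in this log (the 496-byte file scp'd to /etc/cni/net.d/1-k8s.conflist), the exact contents are not shown in the log; the sketch below only illustrates the typical shape of a bridge CNI conflist with host-local IPAM, and every field value in it is an assumption, not data from this test.
	
	    # Sketch only -- bridge name, subnet, and other values are assumed, not taken from the log.
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
	          "ipMasq": true, "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF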
	==> CRI-O <==
	Aug 12 11:39:06 kubernetes-upgrade-535697 crio[2952]: time="2024-08-12 11:39:06.903855102Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723462746903827042,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=78b81c51-b723-4b9c-9021-60ff4f95efd5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:39:06 kubernetes-upgrade-535697 crio[2952]: time="2024-08-12 11:39:06.904356361Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=84f4a3bc-11ee-460c-a4f1-550ed41bd7ce name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:39:06 kubernetes-upgrade-535697 crio[2952]: time="2024-08-12 11:39:06.904423000Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=84f4a3bc-11ee-460c-a4f1-550ed41bd7ce name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:39:06 kubernetes-upgrade-535697 crio[2952]: time="2024-08-12 11:39:06.904633365Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f59fb817b1b6893cebef854497be9af0905693aac93ea6544b2727c64b8410e2,PodSandboxId:a78b868fa2aaa3db87158630a4fb39bf90543d594b332154905bfa0a4554cc2c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1723462739801594470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-535697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 942ca8c7598091716fbee15e4fb0b024,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d96803c12bd99fa6bd82b749a5be38aeadc3b95d111bb18a880c6de4f3d52b,PodSandboxId:42afe5f9c960987975bfe8ca637a4516b4c1c2f50a5337ea63eff3d0dba22794,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723462739738378077,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-535697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 050939a3421e4dbf0d64dbdab1a87eea,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1db6d5f2b59f868bdf3a607b90e20a8b1c3eeedc40e82ed2f99aca04dfe3b35,PodSandboxId:4b71f22c1e2bcc34d9b83aedc15aa39f02ba57019dd41b8c0464d9e278f607a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1723462739732954083,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-535697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0d9ac8bb2a53c92b98cac1eb046590d,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 1,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:836e90128b6673ef5b8b51c5a9731111750be32cc667eb98b2fda3493fb2a904,PodSandboxId:41ae1e9bddc69e53d36f259dcda505bfe3fc20eb20ea68b6af1bb761f40b7e8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1723462739658770127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-535697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e0f5a7d374a17348fe72e2d6e16cba1,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=84f4a3bc-11ee-460c-a4f1-550ed41bd7ce name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:39:06 kubernetes-upgrade-535697 crio[2952]: time="2024-08-12 11:39:06.937765547Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4f605a69-3680-41b6-bddd-901df859428f name=/runtime.v1.RuntimeService/Version
	Aug 12 11:39:06 kubernetes-upgrade-535697 crio[2952]: time="2024-08-12 11:39:06.938006202Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4f605a69-3680-41b6-bddd-901df859428f name=/runtime.v1.RuntimeService/Version
	Aug 12 11:39:06 kubernetes-upgrade-535697 crio[2952]: time="2024-08-12 11:39:06.945738444Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ce9be33f-1e00-479d-b7d3-95f57a8996e1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:39:06 kubernetes-upgrade-535697 crio[2952]: time="2024-08-12 11:39:06.946393558Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723462746946361144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce9be33f-1e00-479d-b7d3-95f57a8996e1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:39:06 kubernetes-upgrade-535697 crio[2952]: time="2024-08-12 11:39:06.946997504Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=468e17eb-9871-4f82-9627-cef68afe46ba name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:39:06 kubernetes-upgrade-535697 crio[2952]: time="2024-08-12 11:39:06.948662729Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=468e17eb-9871-4f82-9627-cef68afe46ba name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:39:06 kubernetes-upgrade-535697 crio[2952]: time="2024-08-12 11:39:06.948903280Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f59fb817b1b6893cebef854497be9af0905693aac93ea6544b2727c64b8410e2,PodSandboxId:a78b868fa2aaa3db87158630a4fb39bf90543d594b332154905bfa0a4554cc2c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1723462739801594470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-535697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 942ca8c7598091716fbee15e4fb0b024,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d96803c12bd99fa6bd82b749a5be38aeadc3b95d111bb18a880c6de4f3d52b,PodSandboxId:42afe5f9c960987975bfe8ca637a4516b4c1c2f50a5337ea63eff3d0dba22794,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723462739738378077,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-535697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 050939a3421e4dbf0d64dbdab1a87eea,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1db6d5f2b59f868bdf3a607b90e20a8b1c3eeedc40e82ed2f99aca04dfe3b35,PodSandboxId:4b71f22c1e2bcc34d9b83aedc15aa39f02ba57019dd41b8c0464d9e278f607a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1723462739732954083,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-535697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0d9ac8bb2a53c92b98cac1eb046590d,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 1,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:836e90128b6673ef5b8b51c5a9731111750be32cc667eb98b2fda3493fb2a904,PodSandboxId:41ae1e9bddc69e53d36f259dcda505bfe3fc20eb20ea68b6af1bb761f40b7e8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1723462739658770127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-535697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e0f5a7d374a17348fe72e2d6e16cba1,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=468e17eb-9871-4f82-9627-cef68afe46ba name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:39:06 kubernetes-upgrade-535697 crio[2952]: time="2024-08-12 11:39:06.988609422Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ad577b9a-3b0f-49be-ac93-3380262886fa name=/runtime.v1.RuntimeService/Version
	Aug 12 11:39:06 kubernetes-upgrade-535697 crio[2952]: time="2024-08-12 11:39:06.988738506Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad577b9a-3b0f-49be-ac93-3380262886fa name=/runtime.v1.RuntimeService/Version
	Aug 12 11:39:06 kubernetes-upgrade-535697 crio[2952]: time="2024-08-12 11:39:06.990250557Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=39585e35-fd9e-4ec3-bbe5-4d7a7bb33d65 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:39:06 kubernetes-upgrade-535697 crio[2952]: time="2024-08-12 11:39:06.990740723Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723462746990710927,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=39585e35-fd9e-4ec3-bbe5-4d7a7bb33d65 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:39:06 kubernetes-upgrade-535697 crio[2952]: time="2024-08-12 11:39:06.991434377Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fc2de8ed-2dfb-4c69-9479-6dcfd8cd5136 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:39:06 kubernetes-upgrade-535697 crio[2952]: time="2024-08-12 11:39:06.991549772Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fc2de8ed-2dfb-4c69-9479-6dcfd8cd5136 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:39:06 kubernetes-upgrade-535697 crio[2952]: time="2024-08-12 11:39:06.991677188Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f59fb817b1b6893cebef854497be9af0905693aac93ea6544b2727c64b8410e2,PodSandboxId:a78b868fa2aaa3db87158630a4fb39bf90543d594b332154905bfa0a4554cc2c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1723462739801594470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-535697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 942ca8c7598091716fbee15e4fb0b024,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d96803c12bd99fa6bd82b749a5be38aeadc3b95d111bb18a880c6de4f3d52b,PodSandboxId:42afe5f9c960987975bfe8ca637a4516b4c1c2f50a5337ea63eff3d0dba22794,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723462739738378077,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-535697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 050939a3421e4dbf0d64dbdab1a87eea,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1db6d5f2b59f868bdf3a607b90e20a8b1c3eeedc40e82ed2f99aca04dfe3b35,PodSandboxId:4b71f22c1e2bcc34d9b83aedc15aa39f02ba57019dd41b8c0464d9e278f607a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1723462739732954083,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-535697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0d9ac8bb2a53c92b98cac1eb046590d,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 1,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:836e90128b6673ef5b8b51c5a9731111750be32cc667eb98b2fda3493fb2a904,PodSandboxId:41ae1e9bddc69e53d36f259dcda505bfe3fc20eb20ea68b6af1bb761f40b7e8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1723462739658770127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-535697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e0f5a7d374a17348fe72e2d6e16cba1,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fc2de8ed-2dfb-4c69-9479-6dcfd8cd5136 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:39:07 kubernetes-upgrade-535697 crio[2952]: time="2024-08-12 11:39:07.031034441Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=67570187-5742-4618-9bcf-cfbb73733557 name=/runtime.v1.RuntimeService/Version
	Aug 12 11:39:07 kubernetes-upgrade-535697 crio[2952]: time="2024-08-12 11:39:07.031128116Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=67570187-5742-4618-9bcf-cfbb73733557 name=/runtime.v1.RuntimeService/Version
	Aug 12 11:39:07 kubernetes-upgrade-535697 crio[2952]: time="2024-08-12 11:39:07.032138966Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8f189898-cb4d-4a75-8c9e-5b047fa39a90 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:39:07 kubernetes-upgrade-535697 crio[2952]: time="2024-08-12 11:39:07.032568160Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723462747032542962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8f189898-cb4d-4a75-8c9e-5b047fa39a90 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:39:07 kubernetes-upgrade-535697 crio[2952]: time="2024-08-12 11:39:07.033062210Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=80d470de-7095-402a-85d5-410833aed761 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:39:07 kubernetes-upgrade-535697 crio[2952]: time="2024-08-12 11:39:07.033128441Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=80d470de-7095-402a-85d5-410833aed761 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:39:07 kubernetes-upgrade-535697 crio[2952]: time="2024-08-12 11:39:07.033251073Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f59fb817b1b6893cebef854497be9af0905693aac93ea6544b2727c64b8410e2,PodSandboxId:a78b868fa2aaa3db87158630a4fb39bf90543d594b332154905bfa0a4554cc2c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1723462739801594470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-535697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 942ca8c7598091716fbee15e4fb0b024,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d96803c12bd99fa6bd82b749a5be38aeadc3b95d111bb18a880c6de4f3d52b,PodSandboxId:42afe5f9c960987975bfe8ca637a4516b4c1c2f50a5337ea63eff3d0dba22794,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723462739738378077,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-535697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 050939a3421e4dbf0d64dbdab1a87eea,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1db6d5f2b59f868bdf3a607b90e20a8b1c3eeedc40e82ed2f99aca04dfe3b35,PodSandboxId:4b71f22c1e2bcc34d9b83aedc15aa39f02ba57019dd41b8c0464d9e278f607a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1723462739732954083,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-535697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0d9ac8bb2a53c92b98cac1eb046590d,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 1,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:836e90128b6673ef5b8b51c5a9731111750be32cc667eb98b2fda3493fb2a904,PodSandboxId:41ae1e9bddc69e53d36f259dcda505bfe3fc20eb20ea68b6af1bb761f40b7e8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1723462739658770127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-535697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e0f5a7d374a17348fe72e2d6e16cba1,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=80d470de-7095-402a-85d5-410833aed761 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f59fb817b1b68       0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c   7 seconds ago       Running             kube-scheduler            1                   a78b868fa2aaa       kube-scheduler-kubernetes-upgrade-535697
	01d96803c12bd       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   7 seconds ago       Running             etcd                      1                   42afe5f9c9609       etcd-kubernetes-upgrade-535697
	e1db6d5f2b59f       fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c   7 seconds ago       Running             kube-controller-manager   1                   4b71f22c1e2bc       kube-controller-manager-kubernetes-upgrade-535697
	836e90128b667       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   7 seconds ago       Running             kube-apiserver            1                   41ae1e9bddc69       kube-apiserver-kubernetes-upgrade-535697
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-535697
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-535697
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
	                    minikube.k8s.io/name=kubernetes-upgrade-535697
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_12T11_39_05_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 11:39:02 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-535697
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 11:39:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 11:39:04 +0000   Mon, 12 Aug 2024 11:39:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 11:39:04 +0000   Mon, 12 Aug 2024 11:39:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 11:39:04 +0000   Mon, 12 Aug 2024 11:39:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 11:39:04 +0000   Mon, 12 Aug 2024 11:39:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.39
	  Hostname:    kubernetes-upgrade-535697
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 07e1358e5ec146aa8374aedf3d005317
	  System UUID:                07e1358e-5ec1-46aa-8374-aedf3d005317
	  Boot ID:                    4a353070-9c11-494d-8505-75f4a37dfc66
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-rc.0
	  Kube-Proxy Version:         
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-535697                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3s
	  kube-system                 kube-apiserver-kubernetes-upgrade-535697             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-535697    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-kubernetes-upgrade-535697             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (4%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 3s    kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  3s    kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3s    kubelet  Node kubernetes-upgrade-535697 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s    kubelet  Node kubernetes-upgrade-535697 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s    kubelet  Node kubernetes-upgrade-535697 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[  +0.069179] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059822] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.200392] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.155201] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.322733] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +4.303870] systemd-fstab-generator[733]: Ignoring "noauto" option for root device
	[  +0.070720] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.172217] systemd-fstab-generator[857]: Ignoring "noauto" option for root device
	[  +6.115428] systemd-fstab-generator[1230]: Ignoring "noauto" option for root device
	[  +0.091196] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.169224] kauditd_printk_skb: 23 callbacks suppressed
	[Aug12 11:33] kauditd_printk_skb: 76 callbacks suppressed
	[  +0.733041] systemd-fstab-generator[2596]: Ignoring "noauto" option for root device
	[  +0.226510] systemd-fstab-generator[2698]: Ignoring "noauto" option for root device
	[  +0.309900] systemd-fstab-generator[2801]: Ignoring "noauto" option for root device
	[  +0.149175] systemd-fstab-generator[2819]: Ignoring "noauto" option for root device
	[  +0.295793] systemd-fstab-generator[2847]: Ignoring "noauto" option for root device
	[Aug12 11:34] systemd-fstab-generator[3088]: Ignoring "noauto" option for root device
	[  +0.104933] kauditd_printk_skb: 191 callbacks suppressed
	[  +2.603091] systemd-fstab-generator[3208]: Ignoring "noauto" option for root device
	[Aug12 11:38] kauditd_printk_skb: 41 callbacks suppressed
	[  +1.642250] systemd-fstab-generator[9232]: Ignoring "noauto" option for root device
	[Aug12 11:39] systemd-fstab-generator[9556]: Ignoring "noauto" option for root device
	[  +0.101102] kauditd_printk_skb: 80 callbacks suppressed
	[  +1.104622] systemd-fstab-generator[9626]: Ignoring "noauto" option for root device
	
	
	==> etcd [01d96803c12bd99fa6bd82b749a5be38aeadc3b95d111bb18a880c6de4f3d52b] <==
	{"level":"info","ts":"2024-08-12T11:39:00.149367Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-12T11:39:00.151966Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"ec29e853f5cd425a","initial-advertise-peer-urls":["https://192.168.50.39:2380"],"listen-peer-urls":["https://192.168.50.39:2380"],"advertise-client-urls":["https://192.168.50.39:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.39:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-12T11:39:00.152254Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-12T11:39:00.152443Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.39:2380"}
	{"level":"info","ts":"2024-08-12T11:39:00.155361Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.39:2380"}
	{"level":"info","ts":"2024-08-12T11:39:00.460561Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec29e853f5cd425a is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-12T11:39:00.460647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec29e853f5cd425a became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-12T11:39:00.460696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec29e853f5cd425a received MsgPreVoteResp from ec29e853f5cd425a at term 1"}
	{"level":"info","ts":"2024-08-12T11:39:00.460733Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec29e853f5cd425a became candidate at term 2"}
	{"level":"info","ts":"2024-08-12T11:39:00.460757Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec29e853f5cd425a received MsgVoteResp from ec29e853f5cd425a at term 2"}
	{"level":"info","ts":"2024-08-12T11:39:00.460784Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec29e853f5cd425a became leader at term 2"}
	{"level":"info","ts":"2024-08-12T11:39:00.460811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ec29e853f5cd425a elected leader ec29e853f5cd425a at term 2"}
	{"level":"info","ts":"2024-08-12T11:39:00.463763Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ec29e853f5cd425a","local-member-attributes":"{Name:kubernetes-upgrade-535697 ClientURLs:[https://192.168.50.39:2379]}","request-path":"/0/members/ec29e853f5cd425a/attributes","cluster-id":"16343206fca1ffcb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-12T11:39:00.463853Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-12T11:39:00.463902Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T11:39:00.470553Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-12T11:39:00.470596Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-12T11:39:00.463932Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-12T11:39:00.471695Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-12T11:39:00.472378Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.39:2379"}
	{"level":"info","ts":"2024-08-12T11:39:00.472712Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"16343206fca1ffcb","local-member-id":"ec29e853f5cd425a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T11:39:00.472798Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T11:39:00.472837Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T11:39:00.473847Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-12T11:39:00.478267Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:39:07 up 7 min,  0 users,  load average: 1.28, 0.51, 0.23
	Linux kubernetes-upgrade-535697 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [836e90128b6673ef5b8b51c5a9731111750be32cc667eb98b2fda3493fb2a904] <==
	I0812 11:39:02.045086       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0812 11:39:02.046291       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0812 11:39:02.058320       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0812 11:39:02.070753       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0812 11:39:02.070833       1 policy_source.go:224] refreshing policies
	E0812 11:39:02.109346       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	E0812 11:39:02.131159       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0812 11:39:02.132343       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0812 11:39:02.132511       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0812 11:39:02.132518       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0812 11:39:02.157834       1 controller.go:615] quota admission added evaluator for: namespaces
	I0812 11:39:02.333928       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0812 11:39:02.943821       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0812 11:39:02.949076       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0812 11:39:02.949108       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0812 11:39:03.724367       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0812 11:39:03.774138       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0812 11:39:03.872414       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0812 11:39:03.896658       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.39]
	I0812 11:39:03.897621       1 controller.go:615] quota admission added evaluator for: endpoints
	I0812 11:39:03.906999       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0812 11:39:04.056076       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0812 11:39:04.682714       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0812 11:39:04.716610       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0812 11:39:04.742647       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [e1db6d5f2b59f868bdf3a607b90e20a8b1c3eeedc40e82ed2f99aca04dfe3b35] <==
	I0812 11:39:06.453341       1 controllermanager.go:797] "Started controller" controller="disruption-controller"
	I0812 11:39:06.453453       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0812 11:39:06.453531       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0812 11:39:06.453552       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0812 11:39:06.704820       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0812 11:39:06.704899       1 controllermanager.go:797] "Started controller" controller="node-ipam-controller"
	I0812 11:39:06.705254       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0812 11:39:06.705285       1 shared_informer.go:313] Waiting for caches to sync for node
	I0812 11:39:06.851882       1 controllermanager.go:797] "Started controller" controller="job-controller"
	I0812 11:39:06.852006       1 job_controller.go:226] "Starting job controller" logger="job-controller"
	I0812 11:39:06.852028       1 shared_informer.go:313] Waiting for caches to sync for job
	I0812 11:39:07.002684       1 controllermanager.go:797] "Started controller" controller="deployment-controller"
	I0812 11:39:07.002796       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0812 11:39:07.002819       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0812 11:39:07.151769       1 controllermanager.go:797] "Started controller" controller="cronjob-controller"
	I0812 11:39:07.151862       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0812 11:39:07.151872       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0812 11:39:07.301415       1 controllermanager.go:797] "Started controller" controller="ttl-controller"
	I0812 11:39:07.301528       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0812 11:39:07.301540       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0812 11:39:07.349800       1 controllermanager.go:797] "Started controller" controller="taint-eviction-controller"
	I0812 11:39:07.349833       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0812 11:39:07.349883       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0812 11:39:07.349903       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0812 11:39:07.349923       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	
	
	==> kube-scheduler [f59fb817b1b6893cebef854497be9af0905693aac93ea6544b2727c64b8410e2] <==
	W0812 11:39:02.032839       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0812 11:39:02.032885       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0812 11:39:02.842706       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0812 11:39:02.842837       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0812 11:39:02.855430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0812 11:39:02.855614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0812 11:39:02.978723       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0812 11:39:02.978775       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0812 11:39:03.051320       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0812 11:39:03.051354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0812 11:39:03.105148       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0812 11:39:03.105183       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0812 11:39:03.125387       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0812 11:39:03.125440       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0812 11:39:03.174886       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0812 11:39:03.174956       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0812 11:39:03.330695       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0812 11:39:03.330747       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0812 11:39:03.415659       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0812 11:39:03.416231       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0812 11:39:03.428351       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0812 11:39:03.428711       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0812 11:39:03.495544       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0812 11:39:03.495683       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0812 11:39:05.489923       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 12 11:39:04 kubernetes-upgrade-535697 kubelet[9563]: I0812 11:39:04.726948    9563 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	Aug 12 11:39:04 kubernetes-upgrade-535697 kubelet[9563]: E0812 11:39:04.729364    9563 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723462744729099745,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 12 11:39:04 kubernetes-upgrade-535697 kubelet[9563]: E0812 11:39:04.729389    9563 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723462744729099745,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 12 11:39:04 kubernetes-upgrade-535697 kubelet[9563]: E0812 11:39:04.775978    9563 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-535697\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-535697"
	Aug 12 11:39:04 kubernetes-upgrade-535697 kubelet[9563]: I0812 11:39:04.828783    9563 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-535697"
	Aug 12 11:39:04 kubernetes-upgrade-535697 kubelet[9563]: I0812 11:39:04.853026    9563 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-535697"
	Aug 12 11:39:04 kubernetes-upgrade-535697 kubelet[9563]: I0812 11:39:04.853121    9563 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-535697"
	Aug 12 11:39:04 kubernetes-upgrade-535697 kubelet[9563]: I0812 11:39:04.913580    9563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/942ca8c7598091716fbee15e4fb0b024-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-535697\" (UID: \"942ca8c7598091716fbee15e4fb0b024\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-535697"
	Aug 12 11:39:04 kubernetes-upgrade-535697 kubelet[9563]: I0812 11:39:04.913657    9563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/050939a3421e4dbf0d64dbdab1a87eea-etcd-data\") pod \"etcd-kubernetes-upgrade-535697\" (UID: \"050939a3421e4dbf0d64dbdab1a87eea\") " pod="kube-system/etcd-kubernetes-upgrade-535697"
	Aug 12 11:39:04 kubernetes-upgrade-535697 kubelet[9563]: I0812 11:39:04.913700    9563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8e0f5a7d374a17348fe72e2d6e16cba1-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-535697\" (UID: \"8e0f5a7d374a17348fe72e2d6e16cba1\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-535697"
	Aug 12 11:39:04 kubernetes-upgrade-535697 kubelet[9563]: I0812 11:39:04.913723    9563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8e0f5a7d374a17348fe72e2d6e16cba1-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-535697\" (UID: \"8e0f5a7d374a17348fe72e2d6e16cba1\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-535697"
	Aug 12 11:39:04 kubernetes-upgrade-535697 kubelet[9563]: I0812 11:39:04.913756    9563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b0d9ac8bb2a53c92b98cac1eb046590d-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-535697\" (UID: \"b0d9ac8bb2a53c92b98cac1eb046590d\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-535697"
	Aug 12 11:39:04 kubernetes-upgrade-535697 kubelet[9563]: I0812 11:39:04.913784    9563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b0d9ac8bb2a53c92b98cac1eb046590d-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-535697\" (UID: \"b0d9ac8bb2a53c92b98cac1eb046590d\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-535697"
	Aug 12 11:39:04 kubernetes-upgrade-535697 kubelet[9563]: I0812 11:39:04.913805    9563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/050939a3421e4dbf0d64dbdab1a87eea-etcd-certs\") pod \"etcd-kubernetes-upgrade-535697\" (UID: \"050939a3421e4dbf0d64dbdab1a87eea\") " pod="kube-system/etcd-kubernetes-upgrade-535697"
	Aug 12 11:39:04 kubernetes-upgrade-535697 kubelet[9563]: I0812 11:39:04.913828    9563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8e0f5a7d374a17348fe72e2d6e16cba1-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-535697\" (UID: \"8e0f5a7d374a17348fe72e2d6e16cba1\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-535697"
	Aug 12 11:39:04 kubernetes-upgrade-535697 kubelet[9563]: I0812 11:39:04.913849    9563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b0d9ac8bb2a53c92b98cac1eb046590d-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-535697\" (UID: \"b0d9ac8bb2a53c92b98cac1eb046590d\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-535697"
	Aug 12 11:39:04 kubernetes-upgrade-535697 kubelet[9563]: I0812 11:39:04.913878    9563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b0d9ac8bb2a53c92b98cac1eb046590d-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-535697\" (UID: \"b0d9ac8bb2a53c92b98cac1eb046590d\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-535697"
	Aug 12 11:39:04 kubernetes-upgrade-535697 kubelet[9563]: I0812 11:39:04.913899    9563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b0d9ac8bb2a53c92b98cac1eb046590d-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-535697\" (UID: \"b0d9ac8bb2a53c92b98cac1eb046590d\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-535697"
	Aug 12 11:39:05 kubernetes-upgrade-535697 kubelet[9563]: I0812 11:39:05.599251    9563 apiserver.go:52] "Watching apiserver"
	Aug 12 11:39:05 kubernetes-upgrade-535697 kubelet[9563]: I0812 11:39:05.612663    9563 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 12 11:39:05 kubernetes-upgrade-535697 kubelet[9563]: I0812 11:39:05.648045    9563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-kubernetes-upgrade-535697" podStartSLOduration=1.6480148780000001 podStartE2EDuration="1.648014878s" podCreationTimestamp="2024-08-12 11:39:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-12 11:39:05.647810722 +0000 UTC m=+1.148804422" watchObservedRunningTime="2024-08-12 11:39:05.648014878 +0000 UTC m=+1.149008596"
	Aug 12 11:39:05 kubernetes-upgrade-535697 kubelet[9563]: I0812 11:39:05.664866    9563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-kubernetes-upgrade-535697" podStartSLOduration=1.664849279 podStartE2EDuration="1.664849279s" podCreationTimestamp="2024-08-12 11:39:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-12 11:39:05.664327736 +0000 UTC m=+1.165321456" watchObservedRunningTime="2024-08-12 11:39:05.664849279 +0000 UTC m=+1.165842998"
	Aug 12 11:39:05 kubernetes-upgrade-535697 kubelet[9563]: I0812 11:39:05.707166    9563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-kubernetes-upgrade-535697" podStartSLOduration=1.7071472600000002 podStartE2EDuration="1.70714726s" podCreationTimestamp="2024-08-12 11:39:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-12 11:39:05.684782375 +0000 UTC m=+1.185776091" watchObservedRunningTime="2024-08-12 11:39:05.70714726 +0000 UTC m=+1.208140979"
	Aug 12 11:39:05 kubernetes-upgrade-535697 kubelet[9563]: I0812 11:39:05.707255    9563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-kubernetes-upgrade-535697" podStartSLOduration=1.7072496959999999 podStartE2EDuration="1.707249696s" podCreationTimestamp="2024-08-12 11:39:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-12 11:39:05.703254141 +0000 UTC m=+1.204247860" watchObservedRunningTime="2024-08-12 11:39:05.707249696 +0000 UTC m=+1.208243414"
	Aug 12 11:39:05 kubernetes-upgrade-535697 kubelet[9563]: E0812 11:39:05.738897    9563 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-kubernetes-upgrade-535697\" already exists" pod="kube-system/etcd-kubernetes-upgrade-535697"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-535697 -n kubernetes-upgrade-535697
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-535697 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-535697 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-535697 describe pod storage-provisioner: exit status 1 (61.451038ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-535697 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-535697" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-535697
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-535697: (1.141141966s)
--- FAIL: TestKubernetesUpgrade (726.23s)
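A minimal sketch of a comparable manual upgrade flow is included here for reference only; it is not taken from this report. It assumes a locally built minikube binary and a working kvm2/libvirt setup, and the profile name kubernetes-upgrade-demo plus the v1.20.0 starting version are illustrative assumptions (only the v1.31.0-rc.0 target version appears in the logs above):

	# Hypothetical sketch, not from the report: start on an older Kubernetes, stop, restart on a newer one, verify, clean up.
	out/minikube-linux-amd64 start -p kubernetes-upgrade-demo --memory=2200 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0
	out/minikube-linux-amd64 stop -p kubernetes-upgrade-demo
	out/minikube-linux-amd64 start -p kubernetes-upgrade-demo --memory=2200 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.31.0-rc.0
	kubectl --context kubernetes-upgrade-demo get nodes -o wide   # kubelet should now report the newer version
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-demo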

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (290.72s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-835962 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-835962 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m50.399342579s)

                                                
                                                
-- stdout --
	* [old-k8s-version-835962] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-835962" primary control-plane node in "old-k8s-version-835962" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 11:32:23.476968   53678 out.go:291] Setting OutFile to fd 1 ...
	I0812 11:32:23.477165   53678 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:32:23.477194   53678 out.go:304] Setting ErrFile to fd 2...
	I0812 11:32:23.477210   53678 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:32:23.477546   53678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 11:32:23.478266   53678 out.go:298] Setting JSON to false
	I0812 11:32:23.479602   53678 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4484,"bootTime":1723457859,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 11:32:23.479818   53678 start.go:139] virtualization: kvm guest
	I0812 11:32:23.482269   53678 out.go:177] * [old-k8s-version-835962] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 11:32:23.483871   53678 notify.go:220] Checking for updates...
	I0812 11:32:23.483949   53678 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 11:32:23.485302   53678 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 11:32:23.487076   53678 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 11:32:23.488726   53678 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 11:32:23.490166   53678 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 11:32:23.491555   53678 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 11:32:23.493577   53678 config.go:182] Loaded profile config "cert-expiration-002803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 11:32:23.493785   53678 config.go:182] Loaded profile config "kubernetes-upgrade-535697": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0812 11:32:23.493965   53678 config.go:182] Loaded profile config "pause-693259": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 11:32:23.494132   53678 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 11:32:23.541218   53678 out.go:177] * Using the kvm2 driver based on user configuration
	I0812 11:32:23.542999   53678 start.go:297] selected driver: kvm2
	I0812 11:32:23.543018   53678 start.go:901] validating driver "kvm2" against <nil>
	I0812 11:32:23.543030   53678 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 11:32:23.543971   53678 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 11:32:23.544079   53678 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19409-3774/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 11:32:23.564702   53678 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 11:32:23.564766   53678 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 11:32:23.565116   53678 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 11:32:23.565160   53678 cni.go:84] Creating CNI manager for ""
	I0812 11:32:23.565182   53678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:32:23.565199   53678 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0812 11:32:23.565279   53678 start.go:340] cluster config:
	{Name:old-k8s-version-835962 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-835962 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:32:23.565456   53678 iso.go:125] acquiring lock: {Name:mk817f4bb6a5031e68978029203802957205757f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 11:32:23.568267   53678 out.go:177] * Starting "old-k8s-version-835962" primary control-plane node in "old-k8s-version-835962" cluster
	I0812 11:32:23.569494   53678 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0812 11:32:23.569546   53678 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0812 11:32:23.569602   53678 cache.go:56] Caching tarball of preloaded images
	I0812 11:32:23.569706   53678 preload.go:172] Found /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 11:32:23.569721   53678 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0812 11:32:23.569829   53678 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/config.json ...
	I0812 11:32:23.569852   53678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/config.json: {Name:mkc20043fac2507ed87b9be888012b85672e4de0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:32:23.570014   53678 start.go:360] acquireMachinesLock for old-k8s-version-835962: {Name:mkf1f3ff7da562a087a1e344d13e67c3c8140973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 11:32:45.449714   53678 start.go:364] duration metric: took 21.879673117s to acquireMachinesLock for "old-k8s-version-835962"
	I0812 11:32:45.449808   53678 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-835962 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-835962 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 11:32:45.449932   53678 start.go:125] createHost starting for "" (driver="kvm2")
	I0812 11:32:45.452210   53678 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 11:32:45.452439   53678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:32:45.452497   53678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:32:45.473965   53678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42275
	I0812 11:32:45.474500   53678 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:32:45.475057   53678 main.go:141] libmachine: Using API Version  1
	I0812 11:32:45.475077   53678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:32:45.475481   53678 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:32:45.475691   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetMachineName
	I0812 11:32:45.475867   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .DriverName
	I0812 11:32:45.476070   53678 start.go:159] libmachine.API.Create for "old-k8s-version-835962" (driver="kvm2")
	I0812 11:32:45.476101   53678 client.go:168] LocalClient.Create starting
	I0812 11:32:45.476137   53678 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem
	I0812 11:32:45.476176   53678 main.go:141] libmachine: Decoding PEM data...
	I0812 11:32:45.476197   53678 main.go:141] libmachine: Parsing certificate...
	I0812 11:32:45.476264   53678 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem
	I0812 11:32:45.476293   53678 main.go:141] libmachine: Decoding PEM data...
	I0812 11:32:45.476318   53678 main.go:141] libmachine: Parsing certificate...
	I0812 11:32:45.476342   53678 main.go:141] libmachine: Running pre-create checks...
	I0812 11:32:45.476356   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .PreCreateCheck
	I0812 11:32:45.476698   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetConfigRaw
	I0812 11:32:45.477107   53678 main.go:141] libmachine: Creating machine...
	I0812 11:32:45.477121   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .Create
	I0812 11:32:45.477245   53678 main.go:141] libmachine: (old-k8s-version-835962) Creating KVM machine...
	I0812 11:32:45.478390   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | found existing default KVM network
	I0812 11:32:45.479999   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | I0812 11:32:45.479839   53918 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001177e0}
	I0812 11:32:45.480019   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | created network xml: 
	I0812 11:32:45.480028   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | <network>
	I0812 11:32:45.480041   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG |   <name>mk-old-k8s-version-835962</name>
	I0812 11:32:45.480288   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG |   <dns enable='no'/>
	I0812 11:32:45.480300   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG |   
	I0812 11:32:45.480315   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0812 11:32:45.480348   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG |     <dhcp>
	I0812 11:32:45.480368   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0812 11:32:45.480377   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG |     </dhcp>
	I0812 11:32:45.480389   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG |   </ip>
	I0812 11:32:45.480395   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG |   
	I0812 11:32:45.480402   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | </network>
	I0812 11:32:45.480413   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | 
	I0812 11:32:45.485802   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | trying to create private KVM network mk-old-k8s-version-835962 192.168.39.0/24...
	I0812 11:32:45.560711   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | private KVM network mk-old-k8s-version-835962 192.168.39.0/24 created
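[Annotation] The private NAT network whose XML is printed above can be reproduced outside of minikube with plain virsh calls. This is only an illustrative sketch: the kvm2 driver talks to libvirt through its API rather than shelling out, and the XML file path below is hypothetical.

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Hypothetical path to an XML file containing the same <network>
    	// definition printed in the log above (name, <dns enable='no'/>,
    	// 192.168.39.0/24 DHCP range).
    	const netXML = "mk-old-k8s-version-835962.xml"
    	if _, err := os.Stat(netXML); err != nil {
    		log.Fatalf("network XML not found: %v", err)
    	}
    	// Define the persistent network, then start it.
    	for _, args := range [][]string{
    		{"net-define", netXML},
    		{"net-start", "mk-old-k8s-version-835962"},
    	} {
    		cmd := exec.Command("virsh", args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			log.Fatalf("virsh %v: %v", args, err)
    		}
    	}
    }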
	I0812 11:32:45.560746   53678 main.go:141] libmachine: (old-k8s-version-835962) Setting up store path in /home/jenkins/minikube-integration/19409-3774/.minikube/machines/old-k8s-version-835962 ...
	I0812 11:32:45.560762   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | I0812 11:32:45.560718   53918 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 11:32:45.560784   53678 main.go:141] libmachine: (old-k8s-version-835962) Building disk image from file:///home/jenkins/minikube-integration/19409-3774/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0812 11:32:45.560897   53678 main.go:141] libmachine: (old-k8s-version-835962) Downloading /home/jenkins/minikube-integration/19409-3774/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19409-3774/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0812 11:32:45.796742   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | I0812 11:32:45.796628   53918 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/old-k8s-version-835962/id_rsa...
	I0812 11:32:45.901836   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | I0812 11:32:45.901686   53918 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/old-k8s-version-835962/old-k8s-version-835962.rawdisk...
	I0812 11:32:45.901869   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | Writing magic tar header
	I0812 11:32:45.901889   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | Writing SSH key tar header
	I0812 11:32:45.901902   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | I0812 11:32:45.901838   53918 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19409-3774/.minikube/machines/old-k8s-version-835962 ...
	I0812 11:32:45.901996   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/old-k8s-version-835962
	I0812 11:32:45.902024   53678 main.go:141] libmachine: (old-k8s-version-835962) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube/machines/old-k8s-version-835962 (perms=drwx------)
	I0812 11:32:45.902037   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube/machines
	I0812 11:32:45.902049   53678 main.go:141] libmachine: (old-k8s-version-835962) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube/machines (perms=drwxr-xr-x)
	I0812 11:32:45.902065   53678 main.go:141] libmachine: (old-k8s-version-835962) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube (perms=drwxr-xr-x)
	I0812 11:32:45.902075   53678 main.go:141] libmachine: (old-k8s-version-835962) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774 (perms=drwxrwxr-x)
	I0812 11:32:45.902096   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 11:32:45.902116   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774
	I0812 11:32:45.902127   53678 main.go:141] libmachine: (old-k8s-version-835962) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0812 11:32:45.902148   53678 main.go:141] libmachine: (old-k8s-version-835962) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0812 11:32:45.902159   53678 main.go:141] libmachine: (old-k8s-version-835962) Creating domain...
	I0812 11:32:45.902171   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0812 11:32:45.902179   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | Checking permissions on dir: /home/jenkins
	I0812 11:32:45.902191   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | Checking permissions on dir: /home
	I0812 11:32:45.902224   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | Skipping /home - not owner
	I0812 11:32:45.903348   53678 main.go:141] libmachine: (old-k8s-version-835962) define libvirt domain using xml: 
	I0812 11:32:45.903366   53678 main.go:141] libmachine: (old-k8s-version-835962) <domain type='kvm'>
	I0812 11:32:45.903402   53678 main.go:141] libmachine: (old-k8s-version-835962)   <name>old-k8s-version-835962</name>
	I0812 11:32:45.903420   53678 main.go:141] libmachine: (old-k8s-version-835962)   <memory unit='MiB'>2200</memory>
	I0812 11:32:45.903426   53678 main.go:141] libmachine: (old-k8s-version-835962)   <vcpu>2</vcpu>
	I0812 11:32:45.903433   53678 main.go:141] libmachine: (old-k8s-version-835962)   <features>
	I0812 11:32:45.903438   53678 main.go:141] libmachine: (old-k8s-version-835962)     <acpi/>
	I0812 11:32:45.903444   53678 main.go:141] libmachine: (old-k8s-version-835962)     <apic/>
	I0812 11:32:45.903450   53678 main.go:141] libmachine: (old-k8s-version-835962)     <pae/>
	I0812 11:32:45.903457   53678 main.go:141] libmachine: (old-k8s-version-835962)     
	I0812 11:32:45.903463   53678 main.go:141] libmachine: (old-k8s-version-835962)   </features>
	I0812 11:32:45.903467   53678 main.go:141] libmachine: (old-k8s-version-835962)   <cpu mode='host-passthrough'>
	I0812 11:32:45.903473   53678 main.go:141] libmachine: (old-k8s-version-835962)   
	I0812 11:32:45.903480   53678 main.go:141] libmachine: (old-k8s-version-835962)   </cpu>
	I0812 11:32:45.903485   53678 main.go:141] libmachine: (old-k8s-version-835962)   <os>
	I0812 11:32:45.903494   53678 main.go:141] libmachine: (old-k8s-version-835962)     <type>hvm</type>
	I0812 11:32:45.903499   53678 main.go:141] libmachine: (old-k8s-version-835962)     <boot dev='cdrom'/>
	I0812 11:32:45.903506   53678 main.go:141] libmachine: (old-k8s-version-835962)     <boot dev='hd'/>
	I0812 11:32:45.903522   53678 main.go:141] libmachine: (old-k8s-version-835962)     <bootmenu enable='no'/>
	I0812 11:32:45.903547   53678 main.go:141] libmachine: (old-k8s-version-835962)   </os>
	I0812 11:32:45.903558   53678 main.go:141] libmachine: (old-k8s-version-835962)   <devices>
	I0812 11:32:45.903567   53678 main.go:141] libmachine: (old-k8s-version-835962)     <disk type='file' device='cdrom'>
	I0812 11:32:45.903602   53678 main.go:141] libmachine: (old-k8s-version-835962)       <source file='/home/jenkins/minikube-integration/19409-3774/.minikube/machines/old-k8s-version-835962/boot2docker.iso'/>
	I0812 11:32:45.903629   53678 main.go:141] libmachine: (old-k8s-version-835962)       <target dev='hdc' bus='scsi'/>
	I0812 11:32:45.903662   53678 main.go:141] libmachine: (old-k8s-version-835962)       <readonly/>
	I0812 11:32:45.903679   53678 main.go:141] libmachine: (old-k8s-version-835962)     </disk>
	I0812 11:32:45.903693   53678 main.go:141] libmachine: (old-k8s-version-835962)     <disk type='file' device='disk'>
	I0812 11:32:45.903706   53678 main.go:141] libmachine: (old-k8s-version-835962)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0812 11:32:45.903721   53678 main.go:141] libmachine: (old-k8s-version-835962)       <source file='/home/jenkins/minikube-integration/19409-3774/.minikube/machines/old-k8s-version-835962/old-k8s-version-835962.rawdisk'/>
	I0812 11:32:45.903731   53678 main.go:141] libmachine: (old-k8s-version-835962)       <target dev='hda' bus='virtio'/>
	I0812 11:32:45.903740   53678 main.go:141] libmachine: (old-k8s-version-835962)     </disk>
	I0812 11:32:45.903754   53678 main.go:141] libmachine: (old-k8s-version-835962)     <interface type='network'>
	I0812 11:32:45.903774   53678 main.go:141] libmachine: (old-k8s-version-835962)       <source network='mk-old-k8s-version-835962'/>
	I0812 11:32:45.903793   53678 main.go:141] libmachine: (old-k8s-version-835962)       <model type='virtio'/>
	I0812 11:32:45.903827   53678 main.go:141] libmachine: (old-k8s-version-835962)     </interface>
	I0812 11:32:45.903839   53678 main.go:141] libmachine: (old-k8s-version-835962)     <interface type='network'>
	I0812 11:32:45.903848   53678 main.go:141] libmachine: (old-k8s-version-835962)       <source network='default'/>
	I0812 11:32:45.903859   53678 main.go:141] libmachine: (old-k8s-version-835962)       <model type='virtio'/>
	I0812 11:32:45.903871   53678 main.go:141] libmachine: (old-k8s-version-835962)     </interface>
	I0812 11:32:45.903882   53678 main.go:141] libmachine: (old-k8s-version-835962)     <serial type='pty'>
	I0812 11:32:45.903891   53678 main.go:141] libmachine: (old-k8s-version-835962)       <target port='0'/>
	I0812 11:32:45.903899   53678 main.go:141] libmachine: (old-k8s-version-835962)     </serial>
	I0812 11:32:45.903910   53678 main.go:141] libmachine: (old-k8s-version-835962)     <console type='pty'>
	I0812 11:32:45.903919   53678 main.go:141] libmachine: (old-k8s-version-835962)       <target type='serial' port='0'/>
	I0812 11:32:45.903930   53678 main.go:141] libmachine: (old-k8s-version-835962)     </console>
	I0812 11:32:45.903940   53678 main.go:141] libmachine: (old-k8s-version-835962)     <rng model='virtio'>
	I0812 11:32:45.903951   53678 main.go:141] libmachine: (old-k8s-version-835962)       <backend model='random'>/dev/random</backend>
	I0812 11:32:45.903963   53678 main.go:141] libmachine: (old-k8s-version-835962)     </rng>
	I0812 11:32:45.903975   53678 main.go:141] libmachine: (old-k8s-version-835962)     
	I0812 11:32:45.903987   53678 main.go:141] libmachine: (old-k8s-version-835962)     
	I0812 11:32:45.903997   53678 main.go:141] libmachine: (old-k8s-version-835962)   </devices>
	I0812 11:32:45.904008   53678 main.go:141] libmachine: (old-k8s-version-835962) </domain>
	I0812 11:32:45.904018   53678 main.go:141] libmachine: (old-k8s-version-835962) 
	I0812 11:32:45.908612   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:09:e0:7d in network default
	I0812 11:32:45.909333   53678 main.go:141] libmachine: (old-k8s-version-835962) Ensuring networks are active...
	I0812 11:32:45.909358   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:32:45.910129   53678 main.go:141] libmachine: (old-k8s-version-835962) Ensuring network default is active
	I0812 11:32:45.910588   53678 main.go:141] libmachine: (old-k8s-version-835962) Ensuring network mk-old-k8s-version-835962 is active
	I0812 11:32:45.911276   53678 main.go:141] libmachine: (old-k8s-version-835962) Getting domain xml...
	I0812 11:32:45.912074   53678 main.go:141] libmachine: (old-k8s-version-835962) Creating domain...
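[Annotation] The domain XML printed above can likewise be defined and started from the command line; the driver does the equivalent through the libvirt API. The sketch below assumes the XML has been saved to a local file (hypothetical path) and then polls virsh domstate until the guest reports running.

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    	"time"
    )

    func main() {
    	const domXML = "old-k8s-version-835962.xml" // hypothetical local copy of the XML above
    	const name = "old-k8s-version-835962"

    	run := func(args ...string) (string, error) {
    		out, err := exec.Command("virsh", args...).CombinedOutput()
    		return strings.TrimSpace(string(out)), err
    	}

    	if _, err := run("define", domXML); err != nil {
    		log.Fatalf("virsh define: %v", err)
    	}
    	if _, err := run("start", name); err != nil {
    		log.Fatalf("virsh start: %v", err)
    	}
    	// Poll until libvirt reports the domain as running.
    	for i := 0; i < 30; i++ {
    		state, err := run("domstate", name)
    		if err == nil && state == "running" {
    			log.Printf("domain %s is running", name)
    			os.Exit(0)
    		}
    		time.Sleep(2 * time.Second)
    	}
    	log.Fatalf("domain %s did not reach running state", name)
    }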
	I0812 11:32:47.294792   53678 main.go:141] libmachine: (old-k8s-version-835962) Waiting to get IP...
	I0812 11:32:47.295835   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:32:47.296407   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | unable to find current IP address of domain old-k8s-version-835962 in network mk-old-k8s-version-835962
	I0812 11:32:47.296435   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | I0812 11:32:47.296377   53918 retry.go:31] will retry after 217.847142ms: waiting for machine to come up
	I0812 11:32:47.517738   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:32:47.518304   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | unable to find current IP address of domain old-k8s-version-835962 in network mk-old-k8s-version-835962
	I0812 11:32:47.518329   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | I0812 11:32:47.518275   53918 retry.go:31] will retry after 374.79379ms: waiting for machine to come up
	I0812 11:32:47.895181   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:32:47.895849   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | unable to find current IP address of domain old-k8s-version-835962 in network mk-old-k8s-version-835962
	I0812 11:32:47.895877   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | I0812 11:32:47.895808   53918 retry.go:31] will retry after 400.015324ms: waiting for machine to come up
	I0812 11:32:48.297447   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:32:48.298065   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | unable to find current IP address of domain old-k8s-version-835962 in network mk-old-k8s-version-835962
	I0812 11:32:48.298095   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | I0812 11:32:48.297987   53918 retry.go:31] will retry after 477.597887ms: waiting for machine to come up
	I0812 11:32:48.777998   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:32:48.778555   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | unable to find current IP address of domain old-k8s-version-835962 in network mk-old-k8s-version-835962
	I0812 11:32:48.778605   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | I0812 11:32:48.778514   53918 retry.go:31] will retry after 582.249287ms: waiting for machine to come up
	I0812 11:32:49.362353   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:32:49.362901   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | unable to find current IP address of domain old-k8s-version-835962 in network mk-old-k8s-version-835962
	I0812 11:32:49.362927   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | I0812 11:32:49.362859   53918 retry.go:31] will retry after 944.34775ms: waiting for machine to come up
	I0812 11:32:50.309021   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:32:50.309554   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | unable to find current IP address of domain old-k8s-version-835962 in network mk-old-k8s-version-835962
	I0812 11:32:50.309590   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | I0812 11:32:50.309500   53918 retry.go:31] will retry after 916.767259ms: waiting for machine to come up
	I0812 11:32:51.228285   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:32:51.228836   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | unable to find current IP address of domain old-k8s-version-835962 in network mk-old-k8s-version-835962
	I0812 11:32:51.228878   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | I0812 11:32:51.228790   53918 retry.go:31] will retry after 1.311335141s: waiting for machine to come up
	I0812 11:32:52.542190   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:32:52.542752   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | unable to find current IP address of domain old-k8s-version-835962 in network mk-old-k8s-version-835962
	I0812 11:32:52.542782   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | I0812 11:32:52.542695   53918 retry.go:31] will retry after 1.708102138s: waiting for machine to come up
	I0812 11:32:54.252261   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:32:54.252765   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | unable to find current IP address of domain old-k8s-version-835962 in network mk-old-k8s-version-835962
	I0812 11:32:54.252794   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | I0812 11:32:54.252702   53918 retry.go:31] will retry after 1.520033568s: waiting for machine to come up
	I0812 11:32:55.775029   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:32:55.775692   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | unable to find current IP address of domain old-k8s-version-835962 in network mk-old-k8s-version-835962
	I0812 11:32:55.775715   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | I0812 11:32:55.775625   53918 retry.go:31] will retry after 2.401234683s: waiting for machine to come up
	I0812 11:32:58.179859   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:32:58.180332   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | unable to find current IP address of domain old-k8s-version-835962 in network mk-old-k8s-version-835962
	I0812 11:32:58.180360   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | I0812 11:32:58.180291   53918 retry.go:31] will retry after 3.458085684s: waiting for machine to come up
	I0812 11:33:01.640188   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:01.640674   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | unable to find current IP address of domain old-k8s-version-835962 in network mk-old-k8s-version-835962
	I0812 11:33:01.640703   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | I0812 11:33:01.640633   53918 retry.go:31] will retry after 3.33946049s: waiting for machine to come up
	I0812 11:33:04.982685   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:04.983285   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | unable to find current IP address of domain old-k8s-version-835962 in network mk-old-k8s-version-835962
	I0812 11:33:04.983315   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | I0812 11:33:04.983235   53918 retry.go:31] will retry after 3.477684582s: waiting for machine to come up
	I0812 11:33:08.462292   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:08.462884   53678 main.go:141] libmachine: (old-k8s-version-835962) Found IP for machine: 192.168.39.17
	I0812 11:33:08.462906   53678 main.go:141] libmachine: (old-k8s-version-835962) Reserving static IP address...
	I0812 11:33:08.462916   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has current primary IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:08.463252   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-835962", mac: "52:54:00:a2:4c:33", ip: "192.168.39.17"} in network mk-old-k8s-version-835962
	I0812 11:33:08.546394   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | Getting to WaitForSSH function...
	I0812 11:33:08.546434   53678 main.go:141] libmachine: (old-k8s-version-835962) Reserved static IP address: 192.168.39.17
	I0812 11:33:08.546449   53678 main.go:141] libmachine: (old-k8s-version-835962) Waiting for SSH to be available...
	I0812 11:33:08.549424   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:08.549796   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:4c:33", ip: ""} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:33:00 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a2:4c:33}
	I0812 11:33:08.549817   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:08.550028   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | Using SSH client type: external
	I0812 11:33:08.550053   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | Using SSH private key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/old-k8s-version-835962/id_rsa (-rw-------)
	I0812 11:33:08.550080   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.17 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19409-3774/.minikube/machines/old-k8s-version-835962/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0812 11:33:08.550099   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | About to run SSH command:
	I0812 11:33:08.550118   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | exit 0
	I0812 11:33:08.680656   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | SSH cmd err, output: <nil>: 
	I0812 11:33:08.680946   53678 main.go:141] libmachine: (old-k8s-version-835962) KVM machine creation complete!
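[Annotation] The "will retry after …: waiting for machine to come up" lines above reflect a poll with growing backoff until the guest's MAC appears in the network's DHCP leases, followed by an "exit 0" probe over SSH. A rough, standalone approximation using virsh and the system ssh client; the key path, attempt counts and timings are illustrative, not minikube's actual implementation.

    package main

    import (
    	"log"
    	"os/exec"
    	"regexp"
    	"time"
    )

    func main() {
    	const (
    		network = "mk-old-k8s-version-835962"
    		mac     = "52:54:00:a2:4c:33"
    		sshKey  = "/home/jenkins/.minikube/machines/old-k8s-version-835962/id_rsa" // illustrative path
    	)

    	// Wait for a DHCP lease for our MAC, backing off between attempts.
    	ipRe := regexp.MustCompile(`(\d+\.\d+\.\d+\.\d+)`)
    	var ip string
    	delay := 200 * time.Millisecond
    	for attempt := 0; attempt < 20 && ip == ""; attempt++ {
    		out, _ := exec.Command("virsh", "net-dhcp-leases", network, "--mac", mac).Output()
    		if m := ipRe.FindString(string(out)); m != "" {
    			ip = m
    			break
    		}
    		log.Printf("no lease yet, retrying in %v", delay)
    		time.Sleep(delay)
    		delay += delay / 2 // grow the backoff, roughly as the log shows
    	}
    	if ip == "" {
    		log.Fatal("machine never obtained an IP")
    	}

    	// Probe SSH the same way the log does: run "exit 0" until it succeeds.
    	for attempt := 0; attempt < 30; attempt++ {
    		err := exec.Command("ssh",
    			"-o", "StrictHostKeyChecking=no",
    			"-o", "UserKnownHostsFile=/dev/null",
    			"-o", "ConnectTimeout=10",
    			"-i", sshKey, "docker@"+ip, "exit", "0").Run()
    		if err == nil {
    			log.Printf("SSH is available at %s", ip)
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	log.Fatal("SSH never became available")
    }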
	I0812 11:33:08.681437   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetConfigRaw
	I0812 11:33:08.681960   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .DriverName
	I0812 11:33:08.682299   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .DriverName
	I0812 11:33:08.682530   53678 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0812 11:33:08.682576   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetState
	I0812 11:33:08.683949   53678 main.go:141] libmachine: Detecting operating system of created instance...
	I0812 11:33:08.683961   53678 main.go:141] libmachine: Waiting for SSH to be available...
	I0812 11:33:08.683966   53678 main.go:141] libmachine: Getting to WaitForSSH function...
	I0812 11:33:08.683972   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHHostname
	I0812 11:33:08.686652   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:08.687086   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:4c:33", ip: ""} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:33:00 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:old-k8s-version-835962 Clientid:01:52:54:00:a2:4c:33}
	I0812 11:33:08.687113   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:08.687306   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHPort
	I0812 11:33:08.687480   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHKeyPath
	I0812 11:33:08.687626   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHKeyPath
	I0812 11:33:08.687751   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHUsername
	I0812 11:33:08.687906   53678 main.go:141] libmachine: Using SSH client type: native
	I0812 11:33:08.688166   53678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0812 11:33:08.688189   53678 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0812 11:33:08.804228   53678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 11:33:08.804261   53678 main.go:141] libmachine: Detecting the provisioner...
	I0812 11:33:08.804273   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHHostname
	I0812 11:33:08.807171   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:08.807534   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:4c:33", ip: ""} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:33:00 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:old-k8s-version-835962 Clientid:01:52:54:00:a2:4c:33}
	I0812 11:33:08.807575   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:08.807750   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHPort
	I0812 11:33:08.807975   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHKeyPath
	I0812 11:33:08.808163   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHKeyPath
	I0812 11:33:08.808296   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHUsername
	I0812 11:33:08.808535   53678 main.go:141] libmachine: Using SSH client type: native
	I0812 11:33:08.808707   53678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0812 11:33:08.808727   53678 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0812 11:33:08.929734   53678 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0812 11:33:08.929799   53678 main.go:141] libmachine: found compatible host: buildroot
	I0812 11:33:08.929805   53678 main.go:141] libmachine: Provisioning with buildroot...
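[Annotation] The provisioner detection above is driven by the guest's /etc/os-release (here Buildroot 2023.02.9). Below is a minimal sketch of that parsing step; the field handling is an approximation of what libmachine does, not its actual code.

    package main

    import (
    	"bufio"
    	"fmt"
    	"io"
    	"os"
    	"strings"
    )

    // parseOSRelease turns KEY=value lines (optionally quoted) into a map.
    func parseOSRelease(r io.Reader) map[string]string {
    	fields := map[string]string{}
    	sc := bufio.NewScanner(r)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if line == "" || strings.HasPrefix(line, "#") {
    			continue
    		}
    		k, v, ok := strings.Cut(line, "=")
    		if !ok {
    			continue
    		}
    		fields[k] = strings.Trim(v, `"`)
    	}
    	return fields
    }

    func main() {
    	f, err := os.Open("/etc/os-release")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	defer f.Close()
    	osr := parseOSRelease(f)
    	// The log's "found compatible host: buildroot" corresponds to ID=buildroot.
    	fmt.Printf("ID=%s VERSION_ID=%s\n", osr["ID"], osr["VERSION_ID"])
    }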
	I0812 11:33:08.929813   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetMachineName
	I0812 11:33:08.930055   53678 buildroot.go:166] provisioning hostname "old-k8s-version-835962"
	I0812 11:33:08.930080   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetMachineName
	I0812 11:33:08.930300   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHHostname
	I0812 11:33:08.933358   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:08.933794   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:4c:33", ip: ""} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:33:00 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:old-k8s-version-835962 Clientid:01:52:54:00:a2:4c:33}
	I0812 11:33:08.933823   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:08.934126   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHPort
	I0812 11:33:08.934311   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHKeyPath
	I0812 11:33:08.934484   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHKeyPath
	I0812 11:33:08.934625   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHUsername
	I0812 11:33:08.934834   53678 main.go:141] libmachine: Using SSH client type: native
	I0812 11:33:08.935006   53678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0812 11:33:08.935019   53678 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-835962 && echo "old-k8s-version-835962" | sudo tee /etc/hostname
	I0812 11:33:09.063863   53678 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-835962
	
	I0812 11:33:09.063896   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHHostname
	I0812 11:33:09.066993   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:09.067369   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:4c:33", ip: ""} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:33:00 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:old-k8s-version-835962 Clientid:01:52:54:00:a2:4c:33}
	I0812 11:33:09.067401   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:09.067553   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHPort
	I0812 11:33:09.067784   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHKeyPath
	I0812 11:33:09.067948   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHKeyPath
	I0812 11:33:09.068076   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHUsername
	I0812 11:33:09.068241   53678 main.go:141] libmachine: Using SSH client type: native
	I0812 11:33:09.068413   53678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0812 11:33:09.068431   53678 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-835962' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-835962/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-835962' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 11:33:09.193453   53678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
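[Annotation] The two SSH commands above set the guest hostname and then patch /etc/hosts idempotently (replace an existing 127.0.1.1 entry, otherwise append one). The sketch below assembles a slightly simplified version of the same remote script for an arbitrary hostname; the ssh target is illustrative and the grep patterns are simplified relative to the log.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // hostnameScript mirrors the remote commands in the log: set the hostname,
    // persist it, then make sure /etc/hosts carries a matching 127.0.1.1 entry.
    func hostnameScript(name string) string {
    	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname >/dev/null
    if ! grep -q '\s%[1]s$' /etc/hosts; then
      if grep -q '^127.0.1.1\s' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts >/dev/null
      fi
    fi`, name)
    }

    func main() {
    	script := hostnameScript("old-k8s-version-835962")
    	// Illustrative: run the assembled script on the guest over ssh.
    	out, err := exec.Command("ssh", "docker@192.168.39.17", script).CombinedOutput()
    	fmt.Printf("err=%v output=%s\n", err, out)
    }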
	I0812 11:33:09.193480   53678 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19409-3774/.minikube CaCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19409-3774/.minikube}
	I0812 11:33:09.193527   53678 buildroot.go:174] setting up certificates
	I0812 11:33:09.193539   53678 provision.go:84] configureAuth start
	I0812 11:33:09.193554   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetMachineName
	I0812 11:33:09.193843   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetIP
	I0812 11:33:09.196542   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:09.196890   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:4c:33", ip: ""} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:33:00 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:old-k8s-version-835962 Clientid:01:52:54:00:a2:4c:33}
	I0812 11:33:09.196919   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:09.197061   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHHostname
	I0812 11:33:09.199504   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:09.199878   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:4c:33", ip: ""} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:33:00 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:old-k8s-version-835962 Clientid:01:52:54:00:a2:4c:33}
	I0812 11:33:09.199917   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:09.200082   53678 provision.go:143] copyHostCerts
	I0812 11:33:09.200155   53678 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem, removing ...
	I0812 11:33:09.200168   53678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem
	I0812 11:33:09.200231   53678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem (1082 bytes)
	I0812 11:33:09.200390   53678 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem, removing ...
	I0812 11:33:09.200403   53678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem
	I0812 11:33:09.200456   53678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem (1123 bytes)
	I0812 11:33:09.200563   53678 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem, removing ...
	I0812 11:33:09.200573   53678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem
	I0812 11:33:09.200602   53678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem (1679 bytes)
	I0812 11:33:09.200688   53678 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-835962 san=[127.0.0.1 192.168.39.17 localhost minikube old-k8s-version-835962]
	I0812 11:33:09.330433   53678 provision.go:177] copyRemoteCerts
	I0812 11:33:09.330499   53678 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 11:33:09.330522   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHHostname
	I0812 11:33:09.334132   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:09.334508   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:4c:33", ip: ""} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:33:00 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:old-k8s-version-835962 Clientid:01:52:54:00:a2:4c:33}
	I0812 11:33:09.334548   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:09.334791   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHPort
	I0812 11:33:09.335005   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHKeyPath
	I0812 11:33:09.335202   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHUsername
	I0812 11:33:09.335379   53678 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/old-k8s-version-835962/id_rsa Username:docker}
	I0812 11:33:09.423570   53678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0812 11:33:09.449500   53678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0812 11:33:09.474111   53678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0812 11:33:09.499343   53678 provision.go:87] duration metric: took 305.791018ms to configureAuth
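[Annotation] provision.go above generates a server certificate signed by the local minikube CA with the SANs listed in the log (127.0.0.1, the guest IP, localhost, minikube, and the profile name). Below is a compact standard-library sketch of that step; the ca.pem/ca-key.pem file names, PKCS#1 key format, and validity period are assumptions.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func mustDecode(path string) []byte {
    	raw, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		log.Fatalf("%s: no PEM data", path)
    	}
    	return block.Bytes
    }

    func main() {
    	// Hypothetical local copies of the CA material referenced in the log.
    	caCert, err := x509.ParseCertificate(mustDecode("ca.pem"))
    	if err != nil {
    		log.Fatal(err)
    	}
    	caKey, err := x509.ParsePKCS1PrivateKey(mustDecode("ca-key.pem"))
    	if err != nil {
    		log.Fatal(err)
    	}

    	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-835962"}},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().AddDate(3, 0, 0), // validity period is an assumption
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs matching the log: san=[127.0.0.1 192.168.39.17 localhost minikube old-k8s-version-835962]
    		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-835962"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.17")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)})
    }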
	I0812 11:33:09.499372   53678 buildroot.go:189] setting minikube options for container-runtime
	I0812 11:33:09.499594   53678 config.go:182] Loaded profile config "old-k8s-version-835962": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0812 11:33:09.499678   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHHostname
	I0812 11:33:09.502542   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:09.502907   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:4c:33", ip: ""} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:33:00 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:old-k8s-version-835962 Clientid:01:52:54:00:a2:4c:33}
	I0812 11:33:09.502935   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:09.503177   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHPort
	I0812 11:33:09.503391   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHKeyPath
	I0812 11:33:09.503558   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHKeyPath
	I0812 11:33:09.503711   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHUsername
	I0812 11:33:09.503884   53678 main.go:141] libmachine: Using SSH client type: native
	I0812 11:33:09.504123   53678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0812 11:33:09.504139   53678 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 11:33:09.772322   53678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 11:33:09.772349   53678 main.go:141] libmachine: Checking connection to Docker...
	I0812 11:33:09.772361   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetURL
	I0812 11:33:09.773967   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | Using libvirt version 6000000
	I0812 11:33:09.776498   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:09.777047   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:4c:33", ip: ""} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:33:00 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:old-k8s-version-835962 Clientid:01:52:54:00:a2:4c:33}
	I0812 11:33:09.777080   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:09.777285   53678 main.go:141] libmachine: Docker is up and running!
	I0812 11:33:09.777299   53678 main.go:141] libmachine: Reticulating splines...
	I0812 11:33:09.777306   53678 client.go:171] duration metric: took 24.301196677s to LocalClient.Create
	I0812 11:33:09.777326   53678 start.go:167] duration metric: took 24.301259213s to libmachine.API.Create "old-k8s-version-835962"
	I0812 11:33:09.777335   53678 start.go:293] postStartSetup for "old-k8s-version-835962" (driver="kvm2")
	I0812 11:33:09.777344   53678 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 11:33:09.777357   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .DriverName
	I0812 11:33:09.777680   53678 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 11:33:09.777703   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHHostname
	I0812 11:33:09.780033   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:09.780354   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:4c:33", ip: ""} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:33:00 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:old-k8s-version-835962 Clientid:01:52:54:00:a2:4c:33}
	I0812 11:33:09.780381   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:09.780534   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHPort
	I0812 11:33:09.780733   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHKeyPath
	I0812 11:33:09.780914   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHUsername
	I0812 11:33:09.781070   53678 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/old-k8s-version-835962/id_rsa Username:docker}
	I0812 11:33:09.867387   53678 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 11:33:09.871946   53678 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 11:33:09.871973   53678 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/addons for local assets ...
	I0812 11:33:09.872032   53678 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/files for local assets ...
	I0812 11:33:09.872102   53678 filesync.go:149] local asset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> 109272.pem in /etc/ssl/certs
	I0812 11:33:09.872190   53678 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 11:33:09.882436   53678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /etc/ssl/certs/109272.pem (1708 bytes)
	I0812 11:33:09.905264   53678 start.go:296] duration metric: took 127.914997ms for postStartSetup
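[Annotation] The filesync step above mirrors anything under .minikube/files into the guest at the same path (here 109272.pem lands in /etc/ssl/certs). A rough equivalent that walks a local assets directory and copies each file to the mirrored path on the guest; the local root and ssh target are illustrative, and the scp-then-move dance is only one way to write into root-owned directories.

    package main

    import (
    	"fmt"
    	"io/fs"
    	"log"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	const (
    		assetsDir = "/home/jenkins/.minikube/files" // illustrative local root
    		target    = "docker@192.168.39.17"          // illustrative guest
    	)
    	err := filepath.WalkDir(assetsDir, func(path string, d fs.DirEntry, walkErr error) error {
    		if walkErr != nil || d.IsDir() {
    			return walkErr
    		}
    		// Remote path mirrors the layout under assetsDir, e.g.
    		// .../files/etc/ssl/certs/109272.pem -> /etc/ssl/certs/109272.pem.
    		remote := "/" + strings.TrimPrefix(path, assetsDir+"/")
    		if err := exec.Command("ssh", target, "sudo", "mkdir", "-p", filepath.Dir(remote)).Run(); err != nil {
    			return fmt.Errorf("mkdir %s: %w", filepath.Dir(remote), err)
    		}
    		// scp to /tmp first, then move into place with sudo, since scp
    		// alone cannot write root-owned paths like /etc/ssl/certs.
    		tmp := "/tmp/" + d.Name()
    		if err := exec.Command("scp", path, target+":"+tmp).Run(); err != nil {
    			return fmt.Errorf("scp %s: %w", path, err)
    		}
    		return exec.Command("ssh", target, "sudo", "mv", tmp, remote).Run()
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    }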
	I0812 11:33:09.905318   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetConfigRaw
	I0812 11:33:09.905928   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetIP
	I0812 11:33:09.909138   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:09.909624   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:4c:33", ip: ""} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:33:00 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:old-k8s-version-835962 Clientid:01:52:54:00:a2:4c:33}
	I0812 11:33:09.909656   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:09.909865   53678 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/config.json ...
	I0812 11:33:09.910065   53678 start.go:128] duration metric: took 24.460124343s to createHost
	I0812 11:33:09.910091   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHHostname
	I0812 11:33:09.912913   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:09.913299   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:4c:33", ip: ""} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:33:00 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:old-k8s-version-835962 Clientid:01:52:54:00:a2:4c:33}
	I0812 11:33:09.913320   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:09.913667   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHPort
	I0812 11:33:09.913913   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHKeyPath
	I0812 11:33:09.914095   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHKeyPath
	I0812 11:33:09.914240   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHUsername
	I0812 11:33:09.914441   53678 main.go:141] libmachine: Using SSH client type: native
	I0812 11:33:09.914643   53678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0812 11:33:09.914663   53678 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0812 11:33:10.029361   53678 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723462390.003180301
	
	I0812 11:33:10.029386   53678 fix.go:216] guest clock: 1723462390.003180301
	I0812 11:33:10.029398   53678 fix.go:229] Guest: 2024-08-12 11:33:10.003180301 +0000 UTC Remote: 2024-08-12 11:33:09.910076897 +0000 UTC m=+46.489095159 (delta=93.103404ms)
	I0812 11:33:10.029424   53678 fix.go:200] guest clock delta is within tolerance: 93.103404ms
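[Annotation] The fix.go lines above compare the guest clock (date +%s.%N over SSH) against the host clock and only act when the delta exceeds a tolerance. A stripped-down version of that check follows; the ssh target and the one-second tolerance are assumptions, and float parsing keeps only roughly microsecond precision, which is enough for this comparison.

    package main

    import (
    	"log"
    	"os/exec"
    	"strconv"
    	"strings"
    	"time"
    )

    func main() {
    	const tolerance = time.Second // assumption; the real threshold lives in minikube

    	// Guest clock as seconds.nanoseconds, exactly as in the log output.
    	out, err := exec.Command("ssh", "docker@192.168.39.17", "date", "+%s.%N").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
    	if err != nil {
    		log.Fatal(err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))

    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	if delta > tolerance {
    		log.Printf("guest clock delta %v exceeds tolerance; would resync", delta)
    	} else {
    		log.Printf("guest clock delta is within tolerance: %v", delta)
    	}
    }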
	I0812 11:33:10.029430   53678 start.go:83] releasing machines lock for "old-k8s-version-835962", held for 24.579657708s
	I0812 11:33:10.029461   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .DriverName
	I0812 11:33:10.029762   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetIP
	I0812 11:33:10.032576   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:10.032906   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:4c:33", ip: ""} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:33:00 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:old-k8s-version-835962 Clientid:01:52:54:00:a2:4c:33}
	I0812 11:33:10.032934   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:10.033075   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .DriverName
	I0812 11:33:10.033583   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .DriverName
	I0812 11:33:10.033771   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .DriverName
	I0812 11:33:10.033869   53678 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 11:33:10.033911   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHHostname
	I0812 11:33:10.033992   53678 ssh_runner.go:195] Run: cat /version.json
	I0812 11:33:10.034020   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHHostname
	I0812 11:33:10.036821   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:10.037122   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:10.037239   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:4c:33", ip: ""} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:33:00 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:old-k8s-version-835962 Clientid:01:52:54:00:a2:4c:33}
	I0812 11:33:10.037264   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:10.037391   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHPort
	I0812 11:33:10.037570   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHKeyPath
	I0812 11:33:10.037653   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:4c:33", ip: ""} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:33:00 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:old-k8s-version-835962 Clientid:01:52:54:00:a2:4c:33}
	I0812 11:33:10.037682   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:10.037735   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHUsername
	I0812 11:33:10.037824   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHPort
	I0812 11:33:10.037885   53678 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/old-k8s-version-835962/id_rsa Username:docker}
	I0812 11:33:10.037979   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHKeyPath
	I0812 11:33:10.038134   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHUsername
	I0812 11:33:10.038278   53678 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/old-k8s-version-835962/id_rsa Username:docker}
	I0812 11:33:10.150936   53678 ssh_runner.go:195] Run: systemctl --version
	I0812 11:33:10.156961   53678 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 11:33:10.323907   53678 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 11:33:10.330608   53678 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 11:33:10.330698   53678 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 11:33:10.349357   53678 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
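The find/mv step above sidelines any pre-existing bridge or podman CNI configs under /etc/cni/net.d (renaming them with a ".mk_disabled" suffix) so they cannot conflict with the CNI setup minikube applies later. A minimal dry-run sketch of the same renaming logic, with the paths taken from the log; actually renaming the files would require root on the VM.

    package main

    import (
    	"fmt"
    	"log"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	// Mirrors the "find /etc/cni/net.d ... -name *bridge* -or -name *podman* ... mv {} {}.mk_disabled" step.
    	patterns := []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"}
    	for _, p := range patterns {
    		matches, err := filepath.Glob(p)
    		if err != nil {
    			log.Fatal(err)
    		}
    		for _, m := range matches {
    			if strings.HasSuffix(m, ".mk_disabled") {
    				continue // already sidelined
    			}
    			fmt.Printf("would rename %s -> %s.mk_disabled\n", m, m)
    			// os.Rename(m, m+".mk_disabled") // needs root on the VM
    		}
    	}
    }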
	I0812 11:33:10.349381   53678 start.go:495] detecting cgroup driver to use...
	I0812 11:33:10.349488   53678 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 11:33:10.367092   53678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 11:33:10.382709   53678 docker.go:217] disabling cri-docker service (if available) ...
	I0812 11:33:10.382794   53678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 11:33:10.396614   53678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 11:33:10.412843   53678 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 11:33:10.531462   53678 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 11:33:10.710303   53678 docker.go:233] disabling docker service ...
	I0812 11:33:10.710369   53678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 11:33:10.725607   53678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 11:33:10.739790   53678 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 11:33:10.902361   53678 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 11:33:11.054283   53678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 11:33:11.069523   53678 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 11:33:11.090520   53678 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0812 11:33:11.090592   53678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:33:11.102704   53678 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 11:33:11.102785   53678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:33:11.114882   53678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:33:11.126901   53678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:33:11.138805   53678 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
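The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so that cri-o uses registry.k8s.io/pause:3.2 as the pause image and cgroupfs as the cgroup manager, with conmon_cgroup pinned to "pod". A small in-memory sketch of those same rewrites; the starting contents below are only illustrative, not the real drop-in file.

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// Illustrative drop-in contents before the edits.
    	conf := `pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"
    `
    	// pause_image -> registry.k8s.io/pause:3.2
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
    	// cgroup_manager -> cgroupfs
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	// Drop any existing conmon_cgroup line, then re-add it as "pod" after cgroup_manager.
    	conf = regexp.MustCompile(`(?m)^\s*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
    	conf = regexp.MustCompile(`(?m)^(\s*cgroup_manager = .*)$`).
    		ReplaceAllString(conf, "${1}\nconmon_cgroup = \"pod\"")
    	fmt.Print(conf)
    }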
	I0812 11:33:11.150922   53678 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 11:33:11.161934   53678 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0812 11:33:11.162001   53678 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0812 11:33:11.177179   53678 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 11:33:11.188212   53678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:33:11.312655   53678 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0812 11:33:11.461455   53678 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 11:33:11.461532   53678 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 11:33:11.466370   53678 start.go:563] Will wait 60s for crictl version
	I0812 11:33:11.466431   53678 ssh_runner.go:195] Run: which crictl
	I0812 11:33:11.470310   53678 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 11:33:11.510059   53678 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
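The version probe above is simply `sudo /usr/bin/crictl version` run over SSH, which reports cri-o 1.29.1 on this VM. A local equivalent of that check, driven from Go (assumes crictl is on PATH and cri-o listens on its default socket):

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	// Equivalent of the "sudo /usr/bin/crictl version" probe in the log.
    	out, err := exec.Command("sudo", "crictl",
    		"--runtime-endpoint", "unix:///var/run/crio/crio.sock", "version").CombinedOutput()
    	if err != nil {
    		log.Fatalf("crictl version failed: %v\n%s", err, out)
    	}
    	fmt.Printf("%s", out)
    }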
	I0812 11:33:11.510138   53678 ssh_runner.go:195] Run: crio --version
	I0812 11:33:11.539024   53678 ssh_runner.go:195] Run: crio --version
	I0812 11:33:11.571707   53678 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0812 11:33:11.573134   53678 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetIP
	I0812 11:33:11.576076   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:11.576465   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:4c:33", ip: ""} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:33:00 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:old-k8s-version-835962 Clientid:01:52:54:00:a2:4c:33}
	I0812 11:33:11.576497   53678 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:33:11.576724   53678 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0812 11:33:11.581256   53678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
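The bash one-liner above is a strip-and-append edit of /etc/hosts: any existing host.minikube.internal entry is dropped, then the current gateway IP is appended. The same transformation sketched in Go, operating on a string rather than the real file:

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// Illustrative /etc/hosts contents with a stale entry.
    	hosts := "127.0.0.1\tlocalhost\n192.168.39.254\thost.minikube.internal\n"
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
    		if strings.HasSuffix(line, "\thost.minikube.internal") {
    			continue // drop the stale entry, like `grep -v`
    		}
    		kept = append(kept, line)
    	}
    	// Append the entry the log adds for this network.
    	kept = append(kept, "192.168.39.1\thost.minikube.internal")
    	fmt.Println(strings.Join(kept, "\n"))
    }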
	I0812 11:33:11.594307   53678 kubeadm.go:883] updating cluster {Name:old-k8s-version-835962 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-835962 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0812 11:33:11.594434   53678 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0812 11:33:11.594473   53678 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 11:33:11.629047   53678 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0812 11:33:11.629125   53678 ssh_runner.go:195] Run: which lz4
	I0812 11:33:11.632972   53678 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0812 11:33:11.636994   53678 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0812 11:33:11.637020   53678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0812 11:33:13.125015   53678 crio.go:462] duration metric: took 1.49207782s to copy over tarball
	I0812 11:33:13.125084   53678 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0812 11:33:15.805119   53678 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.679995983s)
	I0812 11:33:15.805150   53678 crio.go:469] duration metric: took 2.680104664s to extract the tarball
	I0812 11:33:15.805157   53678 ssh_runner.go:146] rm: /preloaded.tar.lz4
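The lines above copy the ~473 MB preload tarball onto the VM, unpack it into /var with lz4-compressed tar, and then delete the tarball. A rough local equivalent of the extraction step, using the same tar invocation recorded in the log but driven from Go:

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Mirrors: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	if err := cmd.Run(); err != nil {
    		log.Fatal(err)
    	}
    }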
	I0812 11:33:15.866446   53678 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 11:33:15.920986   53678 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0812 11:33:15.921015   53678 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0812 11:33:15.921088   53678 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 11:33:15.921091   53678 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0812 11:33:15.921092   53678 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0812 11:33:15.921146   53678 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0812 11:33:15.921213   53678 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0812 11:33:15.921263   53678 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0812 11:33:15.921413   53678 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0812 11:33:15.921213   53678 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0812 11:33:15.928188   53678 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0812 11:33:15.928252   53678 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0812 11:33:15.928269   53678 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0812 11:33:15.928412   53678 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 11:33:15.928463   53678 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0812 11:33:15.928599   53678 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0812 11:33:15.929191   53678 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0812 11:33:15.929216   53678 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0812 11:33:16.169365   53678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0812 11:33:16.193374   53678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0812 11:33:16.193923   53678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0812 11:33:16.207318   53678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0812 11:33:16.208120   53678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0812 11:33:16.212497   53678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0812 11:33:16.231119   53678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0812 11:33:16.234820   53678 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0812 11:33:16.234859   53678 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0812 11:33:16.234900   53678 ssh_runner.go:195] Run: which crictl
	I0812 11:33:16.379104   53678 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0812 11:33:16.379135   53678 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0812 11:33:16.379154   53678 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0812 11:33:16.379167   53678 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0812 11:33:16.379217   53678 ssh_runner.go:195] Run: which crictl
	I0812 11:33:16.379234   53678 ssh_runner.go:195] Run: which crictl
	I0812 11:33:16.396136   53678 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0812 11:33:16.396184   53678 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0812 11:33:16.396231   53678 ssh_runner.go:195] Run: which crictl
	I0812 11:33:16.421026   53678 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0812 11:33:16.421082   53678 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0812 11:33:16.421082   53678 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0812 11:33:16.421123   53678 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0812 11:33:16.421129   53678 ssh_runner.go:195] Run: which crictl
	I0812 11:33:16.421185   53678 ssh_runner.go:195] Run: which crictl
	I0812 11:33:16.429763   53678 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0812 11:33:16.429812   53678 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0812 11:33:16.429818   53678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0812 11:33:16.429856   53678 ssh_runner.go:195] Run: which crictl
	I0812 11:33:16.429885   53678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0812 11:33:16.429935   53678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0812 11:33:16.429953   53678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0812 11:33:16.429959   53678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0812 11:33:16.429990   53678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0812 11:33:16.582736   53678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0812 11:33:16.582858   53678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0812 11:33:16.582969   53678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0812 11:33:16.583813   53678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0812 11:33:16.583910   53678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0812 11:33:16.583945   53678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0812 11:33:16.583971   53678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0812 11:33:16.747571   53678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0812 11:33:16.747625   53678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0812 11:33:16.749249   53678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 11:33:16.762133   53678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0812 11:33:16.762165   53678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0812 11:33:16.762165   53678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0812 11:33:16.762228   53678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0812 11:33:16.762239   53678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0812 11:33:16.868647   53678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0812 11:33:16.874235   53678 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0812 11:33:17.016859   53678 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0812 11:33:17.016953   53678 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0812 11:33:17.016989   53678 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0812 11:33:17.017030   53678 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0812 11:33:17.017069   53678 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0812 11:33:17.017136   53678 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0812 11:33:17.017176   53678 cache_images.go:92] duration metric: took 1.096145392s to LoadCachedImages
	W0812 11:33:17.017252   53678 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19409-3774/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19409-3774/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0812 11:33:17.017268   53678 kubeadm.go:934] updating node { 192.168.39.17 8443 v1.20.0 crio true true} ...
	I0812 11:33:17.017393   53678 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-835962 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-835962 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0812 11:33:17.017479   53678 ssh_runner.go:195] Run: crio config
	I0812 11:33:17.074824   53678 cni.go:84] Creating CNI manager for ""
	I0812 11:33:17.074853   53678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:33:17.074861   53678 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0812 11:33:17.074877   53678 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.17 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-835962 NodeName:old-k8s-version-835962 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.17"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.17 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0812 11:33:17.074993   53678 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.17
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-835962"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.17
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.17"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0812 11:33:17.075059   53678 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0812 11:33:17.085613   53678 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 11:33:17.085688   53678 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0812 11:33:17.096067   53678 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0812 11:33:17.114463   53678 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 11:33:17.132481   53678 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
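The rendered kubeadm config shown above is what gets written to /var/tmp/minikube/kubeadm.yaml.new here (2120 bytes). One quick way to sanity-check such a multi-document file is to decode it with gopkg.in/yaml.v3 and pull out a few fields; a hedged sketch, not part of minikube, with an illustrative file path:

    package main

    import (
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	// Reads a kubeadm config like the one above and prints the "kind" of
    	// each YAML document plus kubernetesVersion when present.
    	f, err := os.Open("kubeadm.yaml") // illustrative path
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			log.Fatal(err)
    		}
    		fmt.Println("kind:", doc["kind"])
    		if v, ok := doc["kubernetesVersion"]; ok {
    			fmt.Println("  kubernetesVersion:", v)
    		}
    	}
    }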
	I0812 11:33:17.159686   53678 ssh_runner.go:195] Run: grep 192.168.39.17	control-plane.minikube.internal$ /etc/hosts
	I0812 11:33:17.163904   53678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.17	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 11:33:17.176722   53678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:33:17.338622   53678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 11:33:17.359231   53678 certs.go:68] Setting up /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962 for IP: 192.168.39.17
	I0812 11:33:17.359257   53678 certs.go:194] generating shared ca certs ...
	I0812 11:33:17.359275   53678 certs.go:226] acquiring lock for ca certs: {Name:mkab0b83a49d5695bc804bf960b468b7a47f73f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:33:17.359434   53678 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key
	I0812 11:33:17.359474   53678 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key
	I0812 11:33:17.359484   53678 certs.go:256] generating profile certs ...
	I0812 11:33:17.359532   53678 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/client.key
	I0812 11:33:17.359562   53678 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/client.crt with IP's: []
	I0812 11:33:17.461288   53678 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/client.crt ...
	I0812 11:33:17.461317   53678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/client.crt: {Name:mk7c767e28aed6c3ec51f89f05a5a4f8051f51ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:33:17.461521   53678 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/client.key ...
	I0812 11:33:17.461541   53678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/client.key: {Name:mkea22e76edcdd9f40bf140ff3f2499f5d02fd88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:33:17.477149   53678 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/apiserver.key.9ec5808d
	I0812 11:33:17.477194   53678 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/apiserver.crt.9ec5808d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.17]
	I0812 11:33:17.665985   53678 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/apiserver.crt.9ec5808d ...
	I0812 11:33:17.666018   53678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/apiserver.crt.9ec5808d: {Name:mkd5ab470e4e3f307b527b6dac3fd15b379a0206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:33:17.666232   53678 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/apiserver.key.9ec5808d ...
	I0812 11:33:17.666251   53678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/apiserver.key.9ec5808d: {Name:mkd9eabdae2823706f514f12d546acdb1759da8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:33:17.666337   53678 certs.go:381] copying /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/apiserver.crt.9ec5808d -> /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/apiserver.crt
	I0812 11:33:17.666428   53678 certs.go:385] copying /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/apiserver.key.9ec5808d -> /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/apiserver.key
	I0812 11:33:17.666490   53678 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/proxy-client.key
	I0812 11:33:17.666508   53678 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/proxy-client.crt with IP's: []
	I0812 11:33:17.777486   53678 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/proxy-client.crt ...
	I0812 11:33:17.777522   53678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/proxy-client.crt: {Name:mk2b9b97a39f8e84a55a6c218ee9f7cab445a752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:33:17.777719   53678 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/proxy-client.key ...
	I0812 11:33:17.777738   53678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/proxy-client.key: {Name:mka786e8b61f68902e5399ea00176dbfe6573b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:33:17.777961   53678 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem (1338 bytes)
	W0812 11:33:17.778018   53678 certs.go:480] ignoring /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927_empty.pem, impossibly tiny 0 bytes
	I0812 11:33:17.778032   53678 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem (1679 bytes)
	I0812 11:33:17.778063   53678 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem (1082 bytes)
	I0812 11:33:17.778093   53678 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem (1123 bytes)
	I0812 11:33:17.778121   53678 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem (1679 bytes)
	I0812 11:33:17.778172   53678 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem (1708 bytes)
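The cert steps above generate a client cert, an apiserver serving cert whose SANs are [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.17], and an aggregator proxy-client cert, all signed by the cached minikubeCA. A compact sketch of producing a serving certificate with those SANs using crypto/x509; it is self-signed here purely for brevity, whereas minikube signs with its CA key.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs taken from the apiserver cert line in the log.
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.17"),
    		},
    	}
    	// Self-signed for the sketch; the real apiserver cert is signed by the cluster CA.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
    		log.Fatal(err)
    	}
    }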
	I0812 11:33:17.778960   53678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 11:33:17.809368   53678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 11:33:17.836902   53678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 11:33:17.861057   53678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 11:33:17.887329   53678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0812 11:33:17.911113   53678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0812 11:33:17.937216   53678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 11:33:17.962084   53678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0812 11:33:18.012834   53678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 11:33:18.038440   53678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem --> /usr/share/ca-certificates/10927.pem (1338 bytes)
	I0812 11:33:18.061993   53678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /usr/share/ca-certificates/109272.pem (1708 bytes)
	I0812 11:33:18.087419   53678 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 11:33:18.104387   53678 ssh_runner.go:195] Run: openssl version
	I0812 11:33:18.110095   53678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109272.pem && ln -fs /usr/share/ca-certificates/109272.pem /etc/ssl/certs/109272.pem"
	I0812 11:33:18.120980   53678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109272.pem
	I0812 11:33:18.125212   53678 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 10:33 /usr/share/ca-certificates/109272.pem
	I0812 11:33:18.125298   53678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109272.pem
	I0812 11:33:18.131242   53678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109272.pem /etc/ssl/certs/3ec20f2e.0"
	I0812 11:33:18.143143   53678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 11:33:18.154634   53678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:33:18.159882   53678 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:33:18.159983   53678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:33:18.166526   53678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 11:33:18.181072   53678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10927.pem && ln -fs /usr/share/ca-certificates/10927.pem /etc/ssl/certs/10927.pem"
	I0812 11:33:18.194535   53678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10927.pem
	I0812 11:33:18.199442   53678 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 10:33 /usr/share/ca-certificates/10927.pem
	I0812 11:33:18.199499   53678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10927.pem
	I0812 11:33:18.205439   53678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10927.pem /etc/ssl/certs/51391683.0"
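The `openssl x509 -hash` calls above compute the subject-hash link names (3ec20f2e.0, b5213941.0, 51391683.0) that are symlinked under /etc/ssl/certs so the system trust store picks the certs up. For a quick look at what one of those PEM files actually contains, the certificate can be decoded with crypto/x509; a small sketch with a path taken from the log:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    )

    func main() {
    	data, err := os.ReadFile("/usr/share/ca-certificates/minikubeCA.pem") // path from the log
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil || block.Type != "CERTIFICATE" {
    		log.Fatal("no CERTIFICATE block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("subject:", cert.Subject)
    	fmt.Println("issuer: ", cert.Issuer)
    	fmt.Println("expires:", cert.NotAfter)
    }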
	I0812 11:33:18.216957   53678 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 11:33:18.221321   53678 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0812 11:33:18.221381   53678 kubeadm.go:392] StartCluster: {Name:old-k8s-version-835962 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-835962 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:33:18.221473   53678 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0812 11:33:18.221527   53678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 11:33:18.262243   53678 cri.go:89] found id: ""
	I0812 11:33:18.262305   53678 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0812 11:33:18.275126   53678 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 11:33:18.297640   53678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 11:33:18.311955   53678 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 11:33:18.311980   53678 kubeadm.go:157] found existing configuration files:
	
	I0812 11:33:18.312042   53678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 11:33:18.324699   53678 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 11:33:18.324760   53678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 11:33:18.341134   53678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 11:33:18.357575   53678 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 11:33:18.357633   53678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 11:33:18.376613   53678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 11:33:18.394013   53678 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 11:33:18.394086   53678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 11:33:18.409596   53678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 11:33:18.418610   53678 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 11:33:18.418671   53678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
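The loop above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not contain it; on this first start the files simply do not exist yet, so every grep exits with status 2. The same check sketched in Go (dry-run, since removal would need root):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing or pointing at a stale endpoint: would be removed.
    			fmt.Println("would remove", f)
    			continue
    		}
    		fmt.Println("keeping", f)
    	}
    }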
	I0812 11:33:18.428355   53678 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 11:33:18.702524   53678 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 11:35:16.223053   53678 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0812 11:35:16.223255   53678 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0812 11:35:16.223474   53678 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0812 11:35:16.223568   53678 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 11:35:16.223739   53678 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 11:35:16.223965   53678 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 11:35:16.224171   53678 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0812 11:35:16.224314   53678 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 11:35:16.226142   53678 out.go:204]   - Generating certificates and keys ...
	I0812 11:35:16.226264   53678 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 11:35:16.226349   53678 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 11:35:16.226446   53678 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0812 11:35:16.226527   53678 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0812 11:35:16.226634   53678 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0812 11:35:16.226742   53678 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0812 11:35:16.226843   53678 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0812 11:35:16.227060   53678 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-835962] and IPs [192.168.39.17 127.0.0.1 ::1]
	I0812 11:35:16.227140   53678 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0812 11:35:16.227323   53678 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-835962] and IPs [192.168.39.17 127.0.0.1 ::1]
	I0812 11:35:16.227415   53678 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0812 11:35:16.227506   53678 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0812 11:35:16.227578   53678 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0812 11:35:16.227663   53678 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 11:35:16.227738   53678 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 11:35:16.227819   53678 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 11:35:16.227919   53678 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 11:35:16.228002   53678 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 11:35:16.228154   53678 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 11:35:16.228281   53678 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 11:35:16.228334   53678 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 11:35:16.228430   53678 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 11:35:16.230315   53678 out.go:204]   - Booting up control plane ...
	I0812 11:35:16.230445   53678 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 11:35:16.230604   53678 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 11:35:16.230725   53678 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 11:35:16.230856   53678 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 11:35:16.231085   53678 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0812 11:35:16.231161   53678 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0812 11:35:16.231257   53678 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:35:16.231498   53678 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:35:16.231594   53678 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:35:16.231835   53678 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:35:16.231933   53678 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:35:16.232185   53678 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:35:16.232278   53678 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:35:16.232522   53678 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:35:16.232622   53678 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:35:16.232897   53678 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:35:16.232907   53678 kubeadm.go:310] 
	I0812 11:35:16.232971   53678 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0812 11:35:16.233029   53678 kubeadm.go:310] 		timed out waiting for the condition
	I0812 11:35:16.233039   53678 kubeadm.go:310] 
	I0812 11:35:16.233084   53678 kubeadm.go:310] 	This error is likely caused by:
	I0812 11:35:16.233131   53678 kubeadm.go:310] 		- The kubelet is not running
	I0812 11:35:16.233275   53678 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0812 11:35:16.233284   53678 kubeadm.go:310] 
	I0812 11:35:16.233420   53678 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0812 11:35:16.233470   53678 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0812 11:35:16.233516   53678 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0812 11:35:16.233525   53678 kubeadm.go:310] 
	I0812 11:35:16.233694   53678 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0812 11:35:16.233808   53678 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0812 11:35:16.233818   53678 kubeadm.go:310] 
	I0812 11:35:16.233956   53678 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0812 11:35:16.234078   53678 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0812 11:35:16.234183   53678 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0812 11:35:16.234282   53678 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	W0812 11:35:16.234426   53678 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-835962] and IPs [192.168.39.17 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-835962] and IPs [192.168.39.17 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-835962] and IPs [192.168.39.17 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-835962] and IPs [192.168.39.17 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0812 11:35:16.234481   53678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0812 11:35:16.234785   53678 kubeadm.go:310] 
	I0812 11:35:16.951787   53678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:35:16.967774   53678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 11:35:16.979499   53678 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 11:35:16.979518   53678 kubeadm.go:157] found existing configuration files:
	
	I0812 11:35:16.979561   53678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 11:35:16.990664   53678 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 11:35:16.990739   53678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 11:35:17.001470   53678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 11:35:17.014204   53678 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 11:35:17.014277   53678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 11:35:17.025357   53678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 11:35:17.035247   53678 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 11:35:17.035319   53678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 11:35:17.045527   53678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 11:35:17.057136   53678 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 11:35:17.057205   53678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 11:35:17.069297   53678 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 11:35:17.317597   53678 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 11:37:13.191720   53678 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0812 11:37:13.191839   53678 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0812 11:37:13.193285   53678 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0812 11:37:13.193350   53678 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 11:37:13.193430   53678 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 11:37:13.193538   53678 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 11:37:13.193673   53678 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0812 11:37:13.193773   53678 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 11:37:13.195586   53678 out.go:204]   - Generating certificates and keys ...
	I0812 11:37:13.195656   53678 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 11:37:13.195727   53678 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 11:37:13.195818   53678 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0812 11:37:13.195874   53678 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0812 11:37:13.195932   53678 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0812 11:37:13.195977   53678 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0812 11:37:13.196033   53678 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0812 11:37:13.196119   53678 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0812 11:37:13.196227   53678 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0812 11:37:13.196338   53678 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0812 11:37:13.196374   53678 kubeadm.go:310] [certs] Using the existing "sa" key
	I0812 11:37:13.196424   53678 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 11:37:13.196467   53678 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 11:37:13.196512   53678 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 11:37:13.196577   53678 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 11:37:13.196623   53678 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 11:37:13.196725   53678 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 11:37:13.196819   53678 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 11:37:13.196909   53678 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 11:37:13.197003   53678 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 11:37:13.198728   53678 out.go:204]   - Booting up control plane ...
	I0812 11:37:13.198837   53678 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 11:37:13.198922   53678 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 11:37:13.198998   53678 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 11:37:13.199106   53678 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 11:37:13.199264   53678 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0812 11:37:13.199342   53678 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0812 11:37:13.199412   53678 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:37:13.199583   53678 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:37:13.199662   53678 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:37:13.199896   53678 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:37:13.200008   53678 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:37:13.200192   53678 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:37:13.200264   53678 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:37:13.200489   53678 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:37:13.200595   53678 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:37:13.200832   53678 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:37:13.200848   53678 kubeadm.go:310] 
	I0812 11:37:13.200897   53678 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0812 11:37:13.200935   53678 kubeadm.go:310] 		timed out waiting for the condition
	I0812 11:37:13.200943   53678 kubeadm.go:310] 
	I0812 11:37:13.200971   53678 kubeadm.go:310] 	This error is likely caused by:
	I0812 11:37:13.201005   53678 kubeadm.go:310] 		- The kubelet is not running
	I0812 11:37:13.201093   53678 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0812 11:37:13.201100   53678 kubeadm.go:310] 
	I0812 11:37:13.201183   53678 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0812 11:37:13.201244   53678 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0812 11:37:13.201301   53678 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0812 11:37:13.201311   53678 kubeadm.go:310] 
	I0812 11:37:13.201447   53678 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0812 11:37:13.201567   53678 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0812 11:37:13.201577   53678 kubeadm.go:310] 
	I0812 11:37:13.201719   53678 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0812 11:37:13.201840   53678 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0812 11:37:13.201912   53678 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0812 11:37:13.201995   53678 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0812 11:37:13.202002   53678 kubeadm.go:310] 
	I0812 11:37:13.202054   53678 kubeadm.go:394] duration metric: took 3m54.98067673s to StartCluster
	I0812 11:37:13.202086   53678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:37:13.202134   53678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:37:13.245876   53678 cri.go:89] found id: ""
	I0812 11:37:13.245909   53678 logs.go:276] 0 containers: []
	W0812 11:37:13.245919   53678 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:37:13.245927   53678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:37:13.245988   53678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:37:13.280782   53678 cri.go:89] found id: ""
	I0812 11:37:13.280808   53678 logs.go:276] 0 containers: []
	W0812 11:37:13.280819   53678 logs.go:278] No container was found matching "etcd"
	I0812 11:37:13.280827   53678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:37:13.280908   53678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:37:13.313557   53678 cri.go:89] found id: ""
	I0812 11:37:13.313618   53678 logs.go:276] 0 containers: []
	W0812 11:37:13.313631   53678 logs.go:278] No container was found matching "coredns"
	I0812 11:37:13.313639   53678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:37:13.313694   53678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:37:13.356944   53678 cri.go:89] found id: ""
	I0812 11:37:13.356968   53678 logs.go:276] 0 containers: []
	W0812 11:37:13.356975   53678 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:37:13.356982   53678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:37:13.357042   53678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:37:13.412345   53678 cri.go:89] found id: ""
	I0812 11:37:13.412380   53678 logs.go:276] 0 containers: []
	W0812 11:37:13.412393   53678 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:37:13.412401   53678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:37:13.412467   53678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:37:13.446403   53678 cri.go:89] found id: ""
	I0812 11:37:13.446429   53678 logs.go:276] 0 containers: []
	W0812 11:37:13.446439   53678 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:37:13.446446   53678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:37:13.446505   53678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:37:13.479938   53678 cri.go:89] found id: ""
	I0812 11:37:13.479967   53678 logs.go:276] 0 containers: []
	W0812 11:37:13.479978   53678 logs.go:278] No container was found matching "kindnet"
	I0812 11:37:13.479990   53678 logs.go:123] Gathering logs for kubelet ...
	I0812 11:37:13.480004   53678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:37:13.531296   53678 logs.go:123] Gathering logs for dmesg ...
	I0812 11:37:13.531338   53678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:37:13.545042   53678 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:37:13.545069   53678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:37:13.654622   53678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:37:13.654643   53678 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:37:13.654655   53678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:37:13.756345   53678 logs.go:123] Gathering logs for container status ...
	I0812 11:37:13.756373   53678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0812 11:37:13.800759   53678 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0812 11:37:13.800813   53678 out.go:239] * 
	* 
	W0812 11:37:13.800884   53678 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0812 11:37:13.800916   53678 out.go:239] * 
	* 
	W0812 11:37:13.802092   53678 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 11:37:13.805169   53678 out.go:177] 
	W0812 11:37:13.806616   53678 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0812 11:37:13.806660   53678 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0812 11:37:13.806681   53678 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0812 11:37:13.808381   53678 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-835962 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-835962 -n old-k8s-version-835962
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-835962 -n old-k8s-version-835962: exit status 6 (264.711929ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0812 11:37:14.107306   56353 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-835962" does not appear in /home/jenkins/minikube-integration/19409-3774/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-835962" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (290.72s)
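Note on the failure above: each kubeadm attempt in this log stalls at wait-control-plane because the kubelet never answers http://localhost:10248/healthz, and minikube's closing hints are to read 'journalctl -xeu kubelet' and to retry with the systemd cgroup driver. Below is a minimal diagnostic sketch along those lines, not a verified fix: the profile name and start flags are copied from the failing command, the inspection commands are the ones the kubeadm output itself suggests, and the final --extra-config retry is the suggestion minikube prints at the end of the log.

	# inspect the kubelet inside the VM (commands taken from the troubleshooting hints above)
	out/minikube-linux-amd64 ssh -p old-k8s-version-835962 "sudo systemctl status kubelet --no-pager"
	out/minikube-linux-amd64 ssh -p old-k8s-version-835962 "sudo journalctl -xeu kubelet --no-pager | tail -n 100"
	out/minikube-linux-amd64 ssh -p old-k8s-version-835962 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"

	# refresh the stale kubectl context flagged in the status post-mortem above
	out/minikube-linux-amd64 update-context -p old-k8s-version-835962

	# or start over with the cgroup driver minikube suggests
	out/minikube-linux-amd64 delete -p old-k8s-version-835962
	out/minikube-linux-amd64 start -p old-k8s-version-835962 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --memory=2200 --extra-config=kubelet.cgroup-driver=systemd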

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (138.91s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-093615 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-093615 --alsologtostderr -v=3: exit status 82 (2m0.484610841s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-093615"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 11:35:51.630221   55901 out.go:291] Setting OutFile to fd 1 ...
	I0812 11:35:51.630949   55901 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:35:51.630972   55901 out.go:304] Setting ErrFile to fd 2...
	I0812 11:35:51.630982   55901 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:35:51.631176   55901 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 11:35:51.631441   55901 out.go:298] Setting JSON to false
	I0812 11:35:51.631544   55901 mustload.go:65] Loading cluster: embed-certs-093615
	I0812 11:35:51.631871   55901 config.go:182] Loaded profile config "embed-certs-093615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 11:35:51.631952   55901 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/embed-certs-093615/config.json ...
	I0812 11:35:51.632144   55901 mustload.go:65] Loading cluster: embed-certs-093615
	I0812 11:35:51.632269   55901 config.go:182] Loaded profile config "embed-certs-093615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 11:35:51.632315   55901 stop.go:39] StopHost: embed-certs-093615
	I0812 11:35:51.632750   55901 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:35:51.632805   55901 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:35:51.648150   55901 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37721
	I0812 11:35:51.648593   55901 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:35:51.649211   55901 main.go:141] libmachine: Using API Version  1
	I0812 11:35:51.649240   55901 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:35:51.649635   55901 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:35:51.652245   55901 out.go:177] * Stopping node "embed-certs-093615"  ...
	I0812 11:35:51.653612   55901 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0812 11:35:51.653649   55901 main.go:141] libmachine: (embed-certs-093615) Calling .DriverName
	I0812 11:35:51.653885   55901 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0812 11:35:51.653907   55901 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHHostname
	I0812 11:35:51.657168   55901 main.go:141] libmachine: (embed-certs-093615) DBG | domain embed-certs-093615 has defined MAC address 52:54:00:a2:eb:0c in network mk-embed-certs-093615
	I0812 11:35:51.657601   55901 main.go:141] libmachine: (embed-certs-093615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:eb:0c", ip: ""} in network mk-embed-certs-093615: {Iface:virbr3 ExpiryTime:2024-08-12 12:34:56 +0000 UTC Type:0 Mac:52:54:00:a2:eb:0c Iaid: IPaddr:192.168.72.191 Prefix:24 Hostname:embed-certs-093615 Clientid:01:52:54:00:a2:eb:0c}
	I0812 11:35:51.657664   55901 main.go:141] libmachine: (embed-certs-093615) DBG | domain embed-certs-093615 has defined IP address 192.168.72.191 and MAC address 52:54:00:a2:eb:0c in network mk-embed-certs-093615
	I0812 11:35:51.657794   55901 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHPort
	I0812 11:35:51.657972   55901 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHKeyPath
	I0812 11:35:51.658139   55901 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHUsername
	I0812 11:35:51.658304   55901 sshutil.go:53] new ssh client: &{IP:192.168.72.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/embed-certs-093615/id_rsa Username:docker}
	I0812 11:35:51.746727   55901 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0812 11:35:51.811134   55901 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0812 11:35:51.851825   55901 main.go:141] libmachine: Stopping "embed-certs-093615"...
	I0812 11:35:51.851857   55901 main.go:141] libmachine: (embed-certs-093615) Calling .GetState
	I0812 11:35:51.853784   55901 main.go:141] libmachine: (embed-certs-093615) Calling .Stop
	I0812 11:35:51.857984   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 0/120
	I0812 11:35:52.859439   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 1/120
	I0812 11:35:53.860720   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 2/120
	I0812 11:35:54.862092   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 3/120
	I0812 11:35:55.863667   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 4/120
	I0812 11:35:56.865911   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 5/120
	I0812 11:35:57.867218   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 6/120
	I0812 11:35:58.868777   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 7/120
	I0812 11:35:59.870252   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 8/120
	I0812 11:36:00.872323   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 9/120
	I0812 11:36:01.873898   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 10/120
	I0812 11:36:02.875506   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 11/120
	I0812 11:36:03.877764   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 12/120
	I0812 11:36:04.879226   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 13/120
	I0812 11:36:05.881491   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 14/120
	I0812 11:36:06.883558   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 15/120
	I0812 11:36:07.885011   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 16/120
	I0812 11:36:08.886483   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 17/120
	I0812 11:36:09.887726   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 18/120
	I0812 11:36:10.889166   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 19/120
	I0812 11:36:11.891734   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 20/120
	I0812 11:36:12.893130   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 21/120
	I0812 11:36:13.895417   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 22/120
	I0812 11:36:14.897923   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 23/120
	I0812 11:36:15.899975   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 24/120
	I0812 11:36:16.901315   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 25/120
	I0812 11:36:17.902894   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 26/120
	I0812 11:36:18.904547   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 27/120
	I0812 11:36:19.906185   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 28/120
	I0812 11:36:20.907671   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 29/120
	I0812 11:36:21.909590   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 30/120
	I0812 11:36:22.911490   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 31/120
	I0812 11:36:23.912811   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 32/120
	I0812 11:36:24.914276   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 33/120
	I0812 11:36:25.915840   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 34/120
	I0812 11:36:26.918239   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 35/120
	I0812 11:36:27.919722   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 36/120
	I0812 11:36:28.921368   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 37/120
	I0812 11:36:29.923849   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 38/120
	I0812 11:36:30.925280   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 39/120
	I0812 11:36:31.927388   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 40/120
	I0812 11:36:32.928927   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 41/120
	I0812 11:36:33.930634   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 42/120
	I0812 11:36:34.932124   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 43/120
	I0812 11:36:35.933361   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 44/120
	I0812 11:36:36.935773   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 45/120
	I0812 11:36:37.937230   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 46/120
	I0812 11:36:38.939408   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 47/120
	I0812 11:36:39.940803   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 48/120
	I0812 11:36:40.942245   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 49/120
	I0812 11:36:41.944347   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 50/120
	I0812 11:36:42.945676   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 51/120
	I0812 11:36:43.947359   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 52/120
	I0812 11:36:44.948624   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 53/120
	I0812 11:36:45.950210   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 54/120
	I0812 11:36:46.952206   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 55/120
	I0812 11:36:47.953883   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 56/120
	I0812 11:36:48.955080   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 57/120
	I0812 11:36:49.956396   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 58/120
	I0812 11:36:50.957796   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 59/120
	I0812 11:36:51.959718   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 60/120
	I0812 11:36:52.961149   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 61/120
	I0812 11:36:53.963347   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 62/120
	I0812 11:36:54.964976   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 63/120
	I0812 11:36:55.966346   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 64/120
	I0812 11:36:56.968623   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 65/120
	I0812 11:36:57.970279   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 66/120
	I0812 11:36:58.971940   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 67/120
	I0812 11:36:59.973437   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 68/120
	I0812 11:37:00.974772   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 69/120
	I0812 11:37:01.977106   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 70/120
	I0812 11:37:02.979489   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 71/120
	I0812 11:37:03.980795   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 72/120
	I0812 11:37:04.982127   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 73/120
	I0812 11:37:05.983519   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 74/120
	I0812 11:37:06.985626   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 75/120
	I0812 11:37:07.987614   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 76/120
	I0812 11:37:08.988949   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 77/120
	I0812 11:37:09.990509   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 78/120
	I0812 11:37:10.992086   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 79/120
	I0812 11:37:11.993481   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 80/120
	I0812 11:37:12.994735   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 81/120
	I0812 11:37:13.996229   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 82/120
	I0812 11:37:14.997682   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 83/120
	I0812 11:37:16.000127   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 84/120
	I0812 11:37:17.002338   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 85/120
	I0812 11:37:18.003971   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 86/120
	I0812 11:37:19.005565   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 87/120
	I0812 11:37:20.006890   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 88/120
	I0812 11:37:21.008305   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 89/120
	I0812 11:37:22.010632   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 90/120
	I0812 11:37:23.012250   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 91/120
	I0812 11:37:24.013631   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 92/120
	I0812 11:37:25.015971   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 93/120
	I0812 11:37:26.017916   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 94/120
	I0812 11:37:27.019888   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 95/120
	I0812 11:37:28.021669   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 96/120
	I0812 11:37:29.023524   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 97/120
	I0812 11:37:30.025280   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 98/120
	I0812 11:37:31.027145   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 99/120
	I0812 11:37:32.029436   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 100/120
	I0812 11:37:33.031424   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 101/120
	I0812 11:37:34.033198   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 102/120
	I0812 11:37:35.035078   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 103/120
	I0812 11:37:36.036705   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 104/120
	I0812 11:37:37.038237   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 105/120
	I0812 11:37:38.039472   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 106/120
	I0812 11:37:39.041150   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 107/120
	I0812 11:37:40.043517   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 108/120
	I0812 11:37:41.045073   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 109/120
	I0812 11:37:42.047356   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 110/120
	I0812 11:37:43.048827   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 111/120
	I0812 11:37:44.051055   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 112/120
	I0812 11:37:45.052612   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 113/120
	I0812 11:37:46.054015   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 114/120
	I0812 11:37:47.056122   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 115/120
	I0812 11:37:48.058133   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 116/120
	I0812 11:37:49.059469   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 117/120
	I0812 11:37:50.060906   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 118/120
	I0812 11:37:51.062300   55901 main.go:141] libmachine: (embed-certs-093615) Waiting for machine to stop 119/120
	I0812 11:37:52.063510   55901 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0812 11:37:52.063591   55901 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0812 11:37:52.065716   55901 out.go:177] 
	W0812 11:37:52.067257   55901 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0812 11:37:52.067276   55901 out.go:239] * 
	* 
	W0812 11:37:52.069875   55901 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 11:37:52.071411   55901 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p embed-certs-093615 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-093615 -n embed-certs-093615
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-093615 -n embed-certs-093615: exit status 3 (18.420566369s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0812 11:38:10.493399   56623 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.191:22: connect: no route to host
	E0812 11:38:10.493427   56623 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.191:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-093615" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (138.91s)
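The stop failure above is a timeout: libmachine polls the VM state roughly once a second and gives up after 120 attempts while the domain still reports "Running", which surfaces as GUEST_STOP_TIMEOUT and exit status 82. The sketch below illustrates the general shape of that bounded-wait loop; the stillRunning probe is a hypothetical stand-in for the real libvirt state query, not minikube's code.

package main

import (
	"errors"
	"fmt"
	"time"
)

// stillRunning is a hypothetical stand-in for the real libvirt/libmachine
// state check; in the log above the probe kept reporting "Running".
func stillRunning() bool { return true }

// waitForStop mirrors the pattern visible in the log: poll about once per
// second and give up after maxRetries attempts.
func waitForStop(maxRetries int) error {
	for i := 0; i < maxRetries; i++ {
		if !stillRunning() {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxRetries)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	if err := waitForStop(120); err != nil {
		// Corresponds to the GUEST_STOP_TIMEOUT / exit status 82 outcome above.
		fmt.Println("stop err:", err)
	}
}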

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-993542 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-993542 --alsologtostderr -v=3: exit status 82 (2m0.4865668s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-993542"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 11:36:54.656602   56268 out.go:291] Setting OutFile to fd 1 ...
	I0812 11:36:54.656845   56268 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:36:54.656853   56268 out.go:304] Setting ErrFile to fd 2...
	I0812 11:36:54.656857   56268 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:36:54.657072   56268 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 11:36:54.657301   56268 out.go:298] Setting JSON to false
	I0812 11:36:54.657375   56268 mustload.go:65] Loading cluster: no-preload-993542
	I0812 11:36:54.657703   56268 config.go:182] Loaded profile config "no-preload-993542": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0812 11:36:54.657772   56268 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/no-preload-993542/config.json ...
	I0812 11:36:54.657944   56268 mustload.go:65] Loading cluster: no-preload-993542
	I0812 11:36:54.658044   56268 config.go:182] Loaded profile config "no-preload-993542": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0812 11:36:54.658083   56268 stop.go:39] StopHost: no-preload-993542
	I0812 11:36:54.658469   56268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:36:54.658512   56268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:36:54.673249   56268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36483
	I0812 11:36:54.673772   56268 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:36:54.674320   56268 main.go:141] libmachine: Using API Version  1
	I0812 11:36:54.674341   56268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:36:54.674670   56268 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:36:54.677580   56268 out.go:177] * Stopping node "no-preload-993542"  ...
	I0812 11:36:54.679444   56268 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0812 11:36:54.679472   56268 main.go:141] libmachine: (no-preload-993542) Calling .DriverName
	I0812 11:36:54.679746   56268 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0812 11:36:54.679774   56268 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHHostname
	I0812 11:36:54.682515   56268 main.go:141] libmachine: (no-preload-993542) DBG | domain no-preload-993542 has defined MAC address 52:54:00:bc:11:b5 in network mk-no-preload-993542
	I0812 11:36:54.682896   56268 main.go:141] libmachine: (no-preload-993542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:11:b5", ip: ""} in network mk-no-preload-993542: {Iface:virbr1 ExpiryTime:2024-08-12 12:35:19 +0000 UTC Type:0 Mac:52:54:00:bc:11:b5 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:no-preload-993542 Clientid:01:52:54:00:bc:11:b5}
	I0812 11:36:54.682938   56268 main.go:141] libmachine: (no-preload-993542) DBG | domain no-preload-993542 has defined IP address 192.168.61.148 and MAC address 52:54:00:bc:11:b5 in network mk-no-preload-993542
	I0812 11:36:54.683012   56268 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHPort
	I0812 11:36:54.683187   56268 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHKeyPath
	I0812 11:36:54.683325   56268 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHUsername
	I0812 11:36:54.683434   56268 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/no-preload-993542/id_rsa Username:docker}
	I0812 11:36:54.769324   56268 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0812 11:36:54.830999   56268 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0812 11:36:54.879697   56268 main.go:141] libmachine: Stopping "no-preload-993542"...
	I0812 11:36:54.879742   56268 main.go:141] libmachine: (no-preload-993542) Calling .GetState
	I0812 11:36:54.881120   56268 main.go:141] libmachine: (no-preload-993542) Calling .Stop
	I0812 11:36:54.884662   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 0/120
	I0812 11:36:55.886251   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 1/120
	I0812 11:36:56.887953   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 2/120
	I0812 11:36:57.890257   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 3/120
	I0812 11:36:58.892139   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 4/120
	I0812 11:36:59.894178   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 5/120
	I0812 11:37:00.895727   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 6/120
	I0812 11:37:01.897379   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 7/120
	I0812 11:37:02.899735   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 8/120
	I0812 11:37:03.900978   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 9/120
	I0812 11:37:04.902324   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 10/120
	I0812 11:37:05.903776   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 11/120
	I0812 11:37:06.905222   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 12/120
	I0812 11:37:07.906888   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 13/120
	I0812 11:37:08.908507   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 14/120
	I0812 11:37:09.910659   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 15/120
	I0812 11:37:10.912396   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 16/120
	I0812 11:37:11.913896   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 17/120
	I0812 11:37:12.915403   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 18/120
	I0812 11:37:13.916915   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 19/120
	I0812 11:37:14.918989   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 20/120
	I0812 11:37:15.920393   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 21/120
	I0812 11:37:16.921952   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 22/120
	I0812 11:37:17.923403   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 23/120
	I0812 11:37:18.925187   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 24/120
	I0812 11:37:19.927519   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 25/120
	I0812 11:37:20.929279   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 26/120
	I0812 11:37:21.931980   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 27/120
	I0812 11:37:22.933762   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 28/120
	I0812 11:37:23.935227   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 29/120
	I0812 11:37:24.937979   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 30/120
	I0812 11:37:25.939739   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 31/120
	I0812 11:37:26.941352   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 32/120
	I0812 11:37:27.942761   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 33/120
	I0812 11:37:28.945281   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 34/120
	I0812 11:37:29.947401   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 35/120
	I0812 11:37:30.949431   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 36/120
	I0812 11:37:31.951137   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 37/120
	I0812 11:37:32.952380   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 38/120
	I0812 11:37:33.954156   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 39/120
	I0812 11:37:34.955601   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 40/120
	I0812 11:37:35.957344   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 41/120
	I0812 11:37:36.958767   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 42/120
	I0812 11:37:37.960264   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 43/120
	I0812 11:37:38.961696   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 44/120
	I0812 11:37:39.963107   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 45/120
	I0812 11:37:40.964604   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 46/120
	I0812 11:37:41.966214   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 47/120
	I0812 11:37:42.967602   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 48/120
	I0812 11:37:43.968973   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 49/120
	I0812 11:37:44.970397   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 50/120
	I0812 11:37:45.971951   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 51/120
	I0812 11:37:46.973643   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 52/120
	I0812 11:37:47.975433   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 53/120
	I0812 11:37:48.976802   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 54/120
	I0812 11:37:49.979020   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 55/120
	I0812 11:37:50.980782   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 56/120
	I0812 11:37:51.982483   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 57/120
	I0812 11:37:52.983922   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 58/120
	I0812 11:37:53.985454   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 59/120
	I0812 11:37:54.986911   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 60/120
	I0812 11:37:55.988172   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 61/120
	I0812 11:37:56.989566   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 62/120
	I0812 11:37:57.991002   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 63/120
	I0812 11:37:58.992604   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 64/120
	I0812 11:37:59.994925   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 65/120
	I0812 11:38:00.996320   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 66/120
	I0812 11:38:01.997699   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 67/120
	I0812 11:38:02.999119   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 68/120
	I0812 11:38:04.001471   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 69/120
	I0812 11:38:05.003128   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 70/120
	I0812 11:38:06.004659   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 71/120
	I0812 11:38:07.006093   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 72/120
	I0812 11:38:08.007721   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 73/120
	I0812 11:38:09.010026   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 74/120
	I0812 11:38:10.012322   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 75/120
	I0812 11:38:11.014905   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 76/120
	I0812 11:38:12.016345   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 77/120
	I0812 11:38:13.017961   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 78/120
	I0812 11:38:14.019265   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 79/120
	I0812 11:38:15.021711   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 80/120
	I0812 11:38:16.023341   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 81/120
	I0812 11:38:17.025059   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 82/120
	I0812 11:38:18.026713   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 83/120
	I0812 11:38:19.028598   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 84/120
	I0812 11:38:20.030946   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 85/120
	I0812 11:38:21.033776   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 86/120
	I0812 11:38:22.035204   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 87/120
	I0812 11:38:23.037136   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 88/120
	I0812 11:38:24.038767   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 89/120
	I0812 11:38:25.041141   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 90/120
	I0812 11:38:26.042988   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 91/120
	I0812 11:38:27.044349   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 92/120
	I0812 11:38:28.046005   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 93/120
	I0812 11:38:29.047531   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 94/120
	I0812 11:38:30.049818   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 95/120
	I0812 11:38:31.051388   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 96/120
	I0812 11:38:32.053197   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 97/120
	I0812 11:38:33.055610   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 98/120
	I0812 11:38:34.057069   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 99/120
	I0812 11:38:35.059178   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 100/120
	I0812 11:38:36.061633   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 101/120
	I0812 11:38:37.063274   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 102/120
	I0812 11:38:38.064952   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 103/120
	I0812 11:38:39.066646   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 104/120
	I0812 11:38:40.068896   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 105/120
	I0812 11:38:41.070363   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 106/120
	I0812 11:38:42.071869   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 107/120
	I0812 11:38:43.073477   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 108/120
	I0812 11:38:44.075518   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 109/120
	I0812 11:38:45.076790   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 110/120
	I0812 11:38:46.079036   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 111/120
	I0812 11:38:47.080533   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 112/120
	I0812 11:38:48.081839   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 113/120
	I0812 11:38:49.083157   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 114/120
	I0812 11:38:50.085167   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 115/120
	I0812 11:38:51.086822   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 116/120
	I0812 11:38:52.088465   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 117/120
	I0812 11:38:53.090026   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 118/120
	I0812 11:38:54.091882   56268 main.go:141] libmachine: (no-preload-993542) Waiting for machine to stop 119/120
	I0812 11:38:55.092629   56268 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0812 11:38:55.092685   56268 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0812 11:38:55.094856   56268 out.go:177] 
	W0812 11:38:55.096622   56268 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0812 11:38:55.096648   56268 out.go:239] * 
	* 
	W0812 11:38:55.099361   56268 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 11:38:55.100794   56268 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p no-preload-993542 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-993542 -n no-preload-993542
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-993542 -n no-preload-993542: exit status 3 (18.622467953s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0812 11:39:13.725250   57015 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.148:22: connect: no route to host
	E0812 11:39:13.725270   57015 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.148:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-993542" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.11s)
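In both Stop failures the post-mortem status check exits with status 3 because the node's SSH endpoint is unreachable ("no route to host" on port 22), so minikube cannot query /var or the container runtime and reports state="Error". A small sketch of that kind of reachability probe follows; the address is the one from the no-preload log, and the probe is illustrative, not minikube's own status code.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address taken from the no-preload log above; 22 is the SSH port minikube uses.
	addr := "192.168.61.148:22"

	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// "no route to host" here is what turns the status check into state="Error".
		fmt.Printf("ssh endpoint unreachable: %v\n", err)
		return
	}
	conn.Close()
	fmt.Println("ssh endpoint reachable; status would proceed to query the guest")
}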

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-835962 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-835962 create -f testdata/busybox.yaml: exit status 1 (52.808334ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-835962" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-835962 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-835962 -n old-k8s-version-835962
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-835962 -n old-k8s-version-835962: exit status 6 (228.761115ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0812 11:37:14.401135   56392 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-835962" does not appear in /home/jenkins/minikube-integration/19409-3774/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-835962" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-835962 -n old-k8s-version-835962
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-835962 -n old-k8s-version-835962: exit status 6 (230.70068ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0812 11:37:14.629420   56422 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-835962" does not appear in /home/jenkins/minikube-integration/19409-3774/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-835962" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.51s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (104.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-835962 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-835962 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m43.804235881s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-835962 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-835962 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-835962 describe deploy/metrics-server -n kube-system: exit status 1 (47.41583ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-835962" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-835962 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-835962 -n old-k8s-version-835962
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-835962 -n old-k8s-version-835962: exit status 6 (233.461584ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0812 11:38:58.712266   57070 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-835962" does not appear in /home/jenkins/minikube-integration/19409-3774/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-835962" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (104.09s)
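The addon-enable failure above sits one layer deeper: the in-guest kubectl apply is refused on localhost:8443, i.e. the API server never came up on the old-k8s-version node, consistent with its FirstStart failure. Below is a hedged sketch of the kind of quick readiness probe one could run against that endpoint from the node; the /readyz path and the skipped TLS verification are assumptions for a throwaway diagnostic, not what the addon code does.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// 8443 is the apiserver port the kubectl error above points at; run this on the node.
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification is acceptable only for a throwaway liveness check.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get("https://localhost:8443/readyz")
	if err != nil {
		// "connection refused" matches the failure in the log: nothing is listening.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver answered:", resp.Status)
}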

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-093615 -n embed-certs-093615
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-093615 -n embed-certs-093615: exit status 3 (3.17153393s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0812 11:38:13.665223   56717 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.191:22: connect: no route to host
	E0812 11:38:13.665243   56717 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.191:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-093615 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-093615 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.149228227s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.191:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-093615 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-093615 -n embed-certs-093615
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-093615 -n embed-certs-093615: exit status 3 (3.062783634s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0812 11:38:22.877327   56799 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.191:22: connect: no route to host
	E0812 11:38:22.877352   56799 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.191:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-093615" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

TestStartStop/group/old-k8s-version/serial/SecondStart (740.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-835962 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-835962 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m18.952657731s)

-- stdout --
	* [old-k8s-version-835962] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-835962" primary control-plane node in "old-k8s-version-835962" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-835962" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0812 11:39:04.267946   57198 out.go:291] Setting OutFile to fd 1 ...
	I0812 11:39:04.268232   57198 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:39:04.268243   57198 out.go:304] Setting ErrFile to fd 2...
	I0812 11:39:04.268248   57198 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:39:04.268506   57198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 11:39:04.269124   57198 out.go:298] Setting JSON to false
	I0812 11:39:04.270163   57198 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4885,"bootTime":1723457859,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 11:39:04.270225   57198 start.go:139] virtualization: kvm guest
	I0812 11:39:04.272642   57198 out.go:177] * [old-k8s-version-835962] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 11:39:04.274125   57198 notify.go:220] Checking for updates...
	I0812 11:39:04.274170   57198 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 11:39:04.275658   57198 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 11:39:04.277167   57198 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 11:39:04.278719   57198 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 11:39:04.280232   57198 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 11:39:04.281947   57198 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 11:39:04.284055   57198 config.go:182] Loaded profile config "old-k8s-version-835962": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0812 11:39:04.284518   57198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:39:04.284613   57198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:39:04.301959   57198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39021
	I0812 11:39:04.302418   57198 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:39:04.303050   57198 main.go:141] libmachine: Using API Version  1
	I0812 11:39:04.303082   57198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:39:04.303461   57198 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:39:04.303656   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .DriverName
	I0812 11:39:04.305580   57198 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0812 11:39:04.306948   57198 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 11:39:04.307390   57198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:39:04.307430   57198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:39:04.322711   57198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46843
	I0812 11:39:04.323143   57198 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:39:04.323639   57198 main.go:141] libmachine: Using API Version  1
	I0812 11:39:04.323659   57198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:39:04.324008   57198 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:39:04.324187   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .DriverName
	I0812 11:39:04.366911   57198 out.go:177] * Using the kvm2 driver based on existing profile
	I0812 11:39:04.368545   57198 start.go:297] selected driver: kvm2
	I0812 11:39:04.368561   57198 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-835962 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-835962 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:39:04.368699   57198 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 11:39:04.369437   57198 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 11:39:04.369524   57198 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19409-3774/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 11:39:04.385687   57198 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 11:39:04.386094   57198 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 11:39:04.386121   57198 cni.go:84] Creating CNI manager for ""
	I0812 11:39:04.386133   57198 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:39:04.386187   57198 start.go:340] cluster config:
	{Name:old-k8s-version-835962 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-835962 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:39:04.386334   57198 iso.go:125] acquiring lock: {Name:mk817f4bb6a5031e68978029203802957205757f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 11:39:04.389877   57198 out.go:177] * Starting "old-k8s-version-835962" primary control-plane node in "old-k8s-version-835962" cluster
	I0812 11:39:04.391544   57198 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0812 11:39:04.391594   57198 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0812 11:39:04.391614   57198 cache.go:56] Caching tarball of preloaded images
	I0812 11:39:04.391703   57198 preload.go:172] Found /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 11:39:04.391716   57198 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0812 11:39:04.391827   57198 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/config.json ...
	I0812 11:39:04.392056   57198 start.go:360] acquireMachinesLock for old-k8s-version-835962: {Name:mkf1f3ff7da562a087a1e344d13e67c3c8140973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 11:43:00.474189   57198 start.go:364] duration metric: took 3m56.082101292s to acquireMachinesLock for "old-k8s-version-835962"
	I0812 11:43:00.474256   57198 start.go:96] Skipping create...Using existing machine configuration
	I0812 11:43:00.474267   57198 fix.go:54] fixHost starting: 
	I0812 11:43:00.474602   57198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:43:00.474633   57198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:43:00.490357   57198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37791
	I0812 11:43:00.490845   57198 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:43:00.491258   57198 main.go:141] libmachine: Using API Version  1
	I0812 11:43:00.491279   57198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:43:00.491608   57198 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:43:00.491829   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .DriverName
	I0812 11:43:00.491946   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetState
	I0812 11:43:00.493673   57198 fix.go:112] recreateIfNeeded on old-k8s-version-835962: state=Stopped err=<nil>
	I0812 11:43:00.493713   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .DriverName
	W0812 11:43:00.493856   57198 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 11:43:00.495368   57198 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-835962" ...
	I0812 11:43:00.501821   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .Start
	I0812 11:43:00.502115   57198 main.go:141] libmachine: (old-k8s-version-835962) Ensuring networks are active...
	I0812 11:43:00.502988   57198 main.go:141] libmachine: (old-k8s-version-835962) Ensuring network default is active
	I0812 11:43:00.503360   57198 main.go:141] libmachine: (old-k8s-version-835962) Ensuring network mk-old-k8s-version-835962 is active
	I0812 11:43:00.503736   57198 main.go:141] libmachine: (old-k8s-version-835962) Getting domain xml...
	I0812 11:43:00.504527   57198 main.go:141] libmachine: (old-k8s-version-835962) Creating domain...
	I0812 11:43:01.765725   57198 main.go:141] libmachine: (old-k8s-version-835962) Waiting to get IP...
	I0812 11:43:01.766717   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:01.767152   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | unable to find current IP address of domain old-k8s-version-835962 in network mk-old-k8s-version-835962
	I0812 11:43:01.767227   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | I0812 11:43:01.767132   58367 retry.go:31] will retry after 250.85613ms: waiting for machine to come up
	I0812 11:43:02.019909   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:02.020400   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | unable to find current IP address of domain old-k8s-version-835962 in network mk-old-k8s-version-835962
	I0812 11:43:02.020431   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | I0812 11:43:02.020375   58367 retry.go:31] will retry after 298.937462ms: waiting for machine to come up
	I0812 11:43:02.320918   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:02.321356   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | unable to find current IP address of domain old-k8s-version-835962 in network mk-old-k8s-version-835962
	I0812 11:43:02.321386   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | I0812 11:43:02.321305   58367 retry.go:31] will retry after 450.235509ms: waiting for machine to come up
	I0812 11:43:02.772940   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:02.773325   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | unable to find current IP address of domain old-k8s-version-835962 in network mk-old-k8s-version-835962
	I0812 11:43:02.773354   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | I0812 11:43:02.773260   58367 retry.go:31] will retry after 571.139907ms: waiting for machine to come up
	I0812 11:43:03.346002   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:03.346441   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | unable to find current IP address of domain old-k8s-version-835962 in network mk-old-k8s-version-835962
	I0812 11:43:03.346533   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | I0812 11:43:03.346370   58367 retry.go:31] will retry after 596.941736ms: waiting for machine to come up
	I0812 11:43:03.945240   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:03.945730   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | unable to find current IP address of domain old-k8s-version-835962 in network mk-old-k8s-version-835962
	I0812 11:43:03.945754   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | I0812 11:43:03.945677   58367 retry.go:31] will retry after 661.000752ms: waiting for machine to come up
	I0812 11:43:04.608747   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:04.609113   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | unable to find current IP address of domain old-k8s-version-835962 in network mk-old-k8s-version-835962
	I0812 11:43:04.609136   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | I0812 11:43:04.609083   58367 retry.go:31] will retry after 1.005354168s: waiting for machine to come up
	I0812 11:43:05.615942   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:05.616346   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | unable to find current IP address of domain old-k8s-version-835962 in network mk-old-k8s-version-835962
	I0812 11:43:05.616387   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | I0812 11:43:05.616253   58367 retry.go:31] will retry after 1.030832314s: waiting for machine to come up
	I0812 11:43:06.648440   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:06.648986   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | unable to find current IP address of domain old-k8s-version-835962 in network mk-old-k8s-version-835962
	I0812 11:43:06.649020   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | I0812 11:43:06.648932   58367 retry.go:31] will retry after 1.713425653s: waiting for machine to come up
	I0812 11:43:08.364857   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:08.365294   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | unable to find current IP address of domain old-k8s-version-835962 in network mk-old-k8s-version-835962
	I0812 11:43:08.365313   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | I0812 11:43:08.365252   58367 retry.go:31] will retry after 2.063972146s: waiting for machine to come up
	I0812 11:43:10.431728   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:10.432342   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | unable to find current IP address of domain old-k8s-version-835962 in network mk-old-k8s-version-835962
	I0812 11:43:10.432372   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | I0812 11:43:10.432293   58367 retry.go:31] will retry after 2.829019013s: waiting for machine to come up
	I0812 11:43:13.264939   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:13.265477   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | unable to find current IP address of domain old-k8s-version-835962 in network mk-old-k8s-version-835962
	I0812 11:43:13.265499   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | I0812 11:43:13.265421   58367 retry.go:31] will retry after 3.475057772s: waiting for machine to come up
	I0812 11:43:16.741835   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:16.742407   57198 main.go:141] libmachine: (old-k8s-version-835962) Found IP for machine: 192.168.39.17
	I0812 11:43:16.742449   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has current primary IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:16.742466   57198 main.go:141] libmachine: (old-k8s-version-835962) Reserving static IP address...
	I0812 11:43:16.742839   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "old-k8s-version-835962", mac: "52:54:00:a2:4c:33", ip: "192.168.39.17"} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:43:11 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:old-k8s-version-835962 Clientid:01:52:54:00:a2:4c:33}
	I0812 11:43:16.742863   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | skip adding static IP to network mk-old-k8s-version-835962 - found existing host DHCP lease matching {name: "old-k8s-version-835962", mac: "52:54:00:a2:4c:33", ip: "192.168.39.17"}
	I0812 11:43:16.742877   57198 main.go:141] libmachine: (old-k8s-version-835962) Reserved static IP address: 192.168.39.17
	I0812 11:43:16.742888   57198 main.go:141] libmachine: (old-k8s-version-835962) Waiting for SSH to be available...
	I0812 11:43:16.742896   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | Getting to WaitForSSH function...
	I0812 11:43:16.745461   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:16.745813   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:4c:33", ip: ""} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:43:11 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:old-k8s-version-835962 Clientid:01:52:54:00:a2:4c:33}
	I0812 11:43:16.745849   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:16.745984   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | Using SSH client type: external
	I0812 11:43:16.746021   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | Using SSH private key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/old-k8s-version-835962/id_rsa (-rw-------)
	I0812 11:43:16.746055   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.17 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19409-3774/.minikube/machines/old-k8s-version-835962/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0812 11:43:16.746070   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | About to run SSH command:
	I0812 11:43:16.746106   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | exit 0
	I0812 11:43:16.872985   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | SSH cmd err, output: <nil>: 
	I0812 11:43:16.873373   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetConfigRaw
	I0812 11:43:16.873989   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetIP
	I0812 11:43:16.876615   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:16.877030   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:4c:33", ip: ""} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:43:11 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:old-k8s-version-835962 Clientid:01:52:54:00:a2:4c:33}
	I0812 11:43:16.877066   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:16.877340   57198 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/config.json ...
	I0812 11:43:16.877559   57198 machine.go:94] provisionDockerMachine start ...
	I0812 11:43:16.877578   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .DriverName
	I0812 11:43:16.877797   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHHostname
	I0812 11:43:16.880217   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:16.880584   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:4c:33", ip: ""} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:43:11 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:old-k8s-version-835962 Clientid:01:52:54:00:a2:4c:33}
	I0812 11:43:16.880619   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:16.880737   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHPort
	I0812 11:43:16.880946   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHKeyPath
	I0812 11:43:16.881107   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHKeyPath
	I0812 11:43:16.881275   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHUsername
	I0812 11:43:16.881453   57198 main.go:141] libmachine: Using SSH client type: native
	I0812 11:43:16.881657   57198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0812 11:43:16.881669   57198 main.go:141] libmachine: About to run SSH command:
	hostname
	I0812 11:43:16.989340   57198 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0812 11:43:16.989376   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetMachineName
	I0812 11:43:16.989680   57198 buildroot.go:166] provisioning hostname "old-k8s-version-835962"
	I0812 11:43:16.989707   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetMachineName
	I0812 11:43:16.989945   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHHostname
	I0812 11:43:16.992853   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:16.993313   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:4c:33", ip: ""} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:43:11 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:old-k8s-version-835962 Clientid:01:52:54:00:a2:4c:33}
	I0812 11:43:16.993346   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:16.993452   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHPort
	I0812 11:43:16.993633   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHKeyPath
	I0812 11:43:16.993860   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHKeyPath
	I0812 11:43:16.994022   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHUsername
	I0812 11:43:16.994230   57198 main.go:141] libmachine: Using SSH client type: native
	I0812 11:43:16.994391   57198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0812 11:43:16.994404   57198 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-835962 && echo "old-k8s-version-835962" | sudo tee /etc/hostname
	I0812 11:43:17.115149   57198 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-835962
	
	I0812 11:43:17.115193   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHHostname
	I0812 11:43:17.118447   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:17.118782   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:4c:33", ip: ""} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:43:11 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:old-k8s-version-835962 Clientid:01:52:54:00:a2:4c:33}
	I0812 11:43:17.118818   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:17.118990   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHPort
	I0812 11:43:17.119185   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHKeyPath
	I0812 11:43:17.119365   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHKeyPath
	I0812 11:43:17.119488   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHUsername
	I0812 11:43:17.119620   57198 main.go:141] libmachine: Using SSH client type: native
	I0812 11:43:17.119786   57198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0812 11:43:17.119803   57198 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-835962' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-835962/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-835962' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 11:43:17.237289   57198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 11:43:17.237331   57198 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19409-3774/.minikube CaCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19409-3774/.minikube}
	I0812 11:43:17.237353   57198 buildroot.go:174] setting up certificates
	I0812 11:43:17.237362   57198 provision.go:84] configureAuth start
	I0812 11:43:17.237371   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetMachineName
	I0812 11:43:17.237730   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetIP
	I0812 11:43:17.240241   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:17.240626   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:4c:33", ip: ""} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:43:11 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:old-k8s-version-835962 Clientid:01:52:54:00:a2:4c:33}
	I0812 11:43:17.240682   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:17.240713   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHHostname
	I0812 11:43:17.243091   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:17.243427   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:4c:33", ip: ""} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:43:11 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:old-k8s-version-835962 Clientid:01:52:54:00:a2:4c:33}
	I0812 11:43:17.243450   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:17.243687   57198 provision.go:143] copyHostCerts
	I0812 11:43:17.243761   57198 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem, removing ...
	I0812 11:43:17.243775   57198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem
	I0812 11:43:17.243856   57198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem (1082 bytes)
	I0812 11:43:17.243966   57198 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem, removing ...
	I0812 11:43:17.243976   57198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem
	I0812 11:43:17.244032   57198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem (1123 bytes)
	I0812 11:43:17.244108   57198 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem, removing ...
	I0812 11:43:17.244118   57198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem
	I0812 11:43:17.244157   57198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem (1679 bytes)
	I0812 11:43:17.244228   57198 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-835962 san=[127.0.0.1 192.168.39.17 localhost minikube old-k8s-version-835962]
	I0812 11:43:17.396184   57198 provision.go:177] copyRemoteCerts
	I0812 11:43:17.396257   57198 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 11:43:17.396284   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHHostname
	I0812 11:43:17.399279   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:17.399735   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:4c:33", ip: ""} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:43:11 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:old-k8s-version-835962 Clientid:01:52:54:00:a2:4c:33}
	I0812 11:43:17.399771   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:17.399982   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHPort
	I0812 11:43:17.400297   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHKeyPath
	I0812 11:43:17.400480   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHUsername
	I0812 11:43:17.400712   57198 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/old-k8s-version-835962/id_rsa Username:docker}
	I0812 11:43:17.483072   57198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0812 11:43:17.507414   57198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0812 11:43:17.531549   57198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0812 11:43:17.555458   57198 provision.go:87] duration metric: took 318.081867ms to configureAuth
	I0812 11:43:17.555495   57198 buildroot.go:189] setting minikube options for container-runtime
	I0812 11:43:17.555796   57198 config.go:182] Loaded profile config "old-k8s-version-835962": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0812 11:43:17.555871   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHHostname
	I0812 11:43:17.558892   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:17.559269   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:4c:33", ip: ""} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:43:11 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:old-k8s-version-835962 Clientid:01:52:54:00:a2:4c:33}
	I0812 11:43:17.559308   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:17.559474   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHPort
	I0812 11:43:17.559683   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHKeyPath
	I0812 11:43:17.559877   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHKeyPath
	I0812 11:43:17.559993   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHUsername
	I0812 11:43:17.560182   57198 main.go:141] libmachine: Using SSH client type: native
	I0812 11:43:17.560359   57198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0812 11:43:17.560373   57198 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 11:43:17.828246   57198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 11:43:17.828274   57198 machine.go:97] duration metric: took 950.700986ms to provisionDockerMachine
	I0812 11:43:17.828285   57198 start.go:293] postStartSetup for "old-k8s-version-835962" (driver="kvm2")
	I0812 11:43:17.828295   57198 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 11:43:17.828310   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .DriverName
	I0812 11:43:17.828660   57198 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 11:43:17.828685   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHHostname
	I0812 11:43:17.831429   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:17.831768   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:4c:33", ip: ""} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:43:11 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:old-k8s-version-835962 Clientid:01:52:54:00:a2:4c:33}
	I0812 11:43:17.831792   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:17.831949   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHPort
	I0812 11:43:17.832154   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHKeyPath
	I0812 11:43:17.832361   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHUsername
	I0812 11:43:17.832531   57198 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/old-k8s-version-835962/id_rsa Username:docker}
	I0812 11:43:17.916291   57198 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 11:43:17.920371   57198 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 11:43:17.920403   57198 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/addons for local assets ...
	I0812 11:43:17.920482   57198 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/files for local assets ...
	I0812 11:43:17.920598   57198 filesync.go:149] local asset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> 109272.pem in /etc/ssl/certs
	I0812 11:43:17.920727   57198 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 11:43:17.930272   57198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /etc/ssl/certs/109272.pem (1708 bytes)
	I0812 11:43:17.953064   57198 start.go:296] duration metric: took 124.767156ms for postStartSetup
	I0812 11:43:17.953115   57198 fix.go:56] duration metric: took 17.478846567s for fixHost
	I0812 11:43:17.953140   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHHostname
	I0812 11:43:17.956055   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:17.956356   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:4c:33", ip: ""} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:43:11 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:old-k8s-version-835962 Clientid:01:52:54:00:a2:4c:33}
	I0812 11:43:17.956388   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:17.956532   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHPort
	I0812 11:43:17.956755   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHKeyPath
	I0812 11:43:17.956935   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHKeyPath
	I0812 11:43:17.957064   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHUsername
	I0812 11:43:17.957252   57198 main.go:141] libmachine: Using SSH client type: native
	I0812 11:43:17.957495   57198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0812 11:43:17.957509   57198 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0812 11:43:18.065593   57198 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723462998.024170578
	
	I0812 11:43:18.065623   57198 fix.go:216] guest clock: 1723462998.024170578
	I0812 11:43:18.065631   57198 fix.go:229] Guest: 2024-08-12 11:43:18.024170578 +0000 UTC Remote: 2024-08-12 11:43:17.953120869 +0000 UTC m=+253.719901812 (delta=71.049709ms)
	I0812 11:43:18.065668   57198 fix.go:200] guest clock delta is within tolerance: 71.049709ms
	I0812 11:43:18.065673   57198 start.go:83] releasing machines lock for "old-k8s-version-835962", held for 17.591448501s
	I0812 11:43:18.065696   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .DriverName
	I0812 11:43:18.065990   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetIP
	I0812 11:43:18.068919   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:18.069312   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:4c:33", ip: ""} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:43:11 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:old-k8s-version-835962 Clientid:01:52:54:00:a2:4c:33}
	I0812 11:43:18.069344   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:18.069518   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .DriverName
	I0812 11:43:18.070010   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .DriverName
	I0812 11:43:18.070163   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .DriverName
	I0812 11:43:18.070214   57198 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 11:43:18.070258   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHHostname
	I0812 11:43:18.070400   57198 ssh_runner.go:195] Run: cat /version.json
	I0812 11:43:18.070429   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHHostname
	I0812 11:43:18.073278   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:18.073325   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:18.073654   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:4c:33", ip: ""} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:43:11 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:old-k8s-version-835962 Clientid:01:52:54:00:a2:4c:33}
	I0812 11:43:18.073684   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:18.073711   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:4c:33", ip: ""} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:43:11 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:old-k8s-version-835962 Clientid:01:52:54:00:a2:4c:33}
	I0812 11:43:18.073728   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:18.073861   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHPort
	I0812 11:43:18.073949   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHPort
	I0812 11:43:18.074093   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHKeyPath
	I0812 11:43:18.074100   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHKeyPath
	I0812 11:43:18.074278   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHUsername
	I0812 11:43:18.074280   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetSSHUsername
	I0812 11:43:18.074456   57198 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/old-k8s-version-835962/id_rsa Username:docker}
	I0812 11:43:18.074447   57198 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/old-k8s-version-835962/id_rsa Username:docker}
	I0812 11:43:18.193917   57198 ssh_runner.go:195] Run: systemctl --version
	I0812 11:43:18.200005   57198 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 11:43:18.353028   57198 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 11:43:18.358637   57198 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 11:43:18.358731   57198 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 11:43:18.374615   57198 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0812 11:43:18.374644   57198 start.go:495] detecting cgroup driver to use...
	I0812 11:43:18.374717   57198 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 11:43:18.392221   57198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 11:43:18.406983   57198 docker.go:217] disabling cri-docker service (if available) ...
	I0812 11:43:18.407052   57198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 11:43:18.421546   57198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 11:43:18.435846   57198 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 11:43:18.549137   57198 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 11:43:18.714602   57198 docker.go:233] disabling docker service ...
	I0812 11:43:18.714661   57198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 11:43:18.728822   57198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 11:43:18.741885   57198 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 11:43:18.860649   57198 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 11:43:18.970429   57198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 11:43:18.985435   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 11:43:19.003604   57198 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0812 11:43:19.003658   57198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:43:19.014141   57198 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 11:43:19.014200   57198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:43:19.024570   57198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:43:19.034826   57198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:43:19.045111   57198 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 11:43:19.055646   57198 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 11:43:19.064912   57198 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0812 11:43:19.064977   57198 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0812 11:43:19.077126   57198 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 11:43:19.087255   57198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:43:19.199431   57198 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0812 11:43:19.332722   57198 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 11:43:19.332811   57198 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 11:43:19.338121   57198 start.go:563] Will wait 60s for crictl version
	I0812 11:43:19.338182   57198 ssh_runner.go:195] Run: which crictl
	I0812 11:43:19.342386   57198 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 11:43:19.389822   57198 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0812 11:43:19.389907   57198 ssh_runner.go:195] Run: crio --version
	I0812 11:43:19.417620   57198 ssh_runner.go:195] Run: crio --version
	I0812 11:43:19.447793   57198 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0812 11:43:19.449122   57198 main.go:141] libmachine: (old-k8s-version-835962) Calling .GetIP
	I0812 11:43:19.452596   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:19.453092   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:4c:33", ip: ""} in network mk-old-k8s-version-835962: {Iface:virbr4 ExpiryTime:2024-08-12 12:43:11 +0000 UTC Type:0 Mac:52:54:00:a2:4c:33 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:old-k8s-version-835962 Clientid:01:52:54:00:a2:4c:33}
	I0812 11:43:19.453123   57198 main.go:141] libmachine: (old-k8s-version-835962) DBG | domain old-k8s-version-835962 has defined IP address 192.168.39.17 and MAC address 52:54:00:a2:4c:33 in network mk-old-k8s-version-835962
	I0812 11:43:19.453370   57198 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0812 11:43:19.457744   57198 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 11:43:19.470145   57198 kubeadm.go:883] updating cluster {Name:old-k8s-version-835962 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-835962 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0812 11:43:19.470283   57198 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0812 11:43:19.470357   57198 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 11:43:19.523022   57198 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0812 11:43:19.523095   57198 ssh_runner.go:195] Run: which lz4
	I0812 11:43:19.527390   57198 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0812 11:43:19.532348   57198 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0812 11:43:19.532389   57198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0812 11:43:21.051991   57198 crio.go:462] duration metric: took 1.524644133s to copy over tarball
	I0812 11:43:21.052068   57198 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0812 11:43:24.046334   57198 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.994233907s)
	I0812 11:43:24.046385   57198 crio.go:469] duration metric: took 2.994364389s to extract the tarball
	I0812 11:43:24.046392   57198 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0812 11:43:24.087843   57198 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 11:43:24.120989   57198 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0812 11:43:24.121018   57198 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0812 11:43:24.121069   57198 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 11:43:24.121095   57198 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0812 11:43:24.121127   57198 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0812 11:43:24.121135   57198 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0812 11:43:24.121186   57198 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0812 11:43:24.121093   57198 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0812 11:43:24.121265   57198 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0812 11:43:24.121411   57198 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0812 11:43:24.122909   57198 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0812 11:43:24.122920   57198 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0812 11:43:24.122912   57198 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0812 11:43:24.122921   57198 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 11:43:24.122986   57198 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0812 11:43:24.123006   57198 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0812 11:43:24.123023   57198 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0812 11:43:24.123050   57198 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0812 11:43:24.363087   57198 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0812 11:43:24.405355   57198 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0812 11:43:24.406396   57198 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0812 11:43:24.406437   57198 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0812 11:43:24.406477   57198 ssh_runner.go:195] Run: which crictl
	I0812 11:43:24.416307   57198 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0812 11:43:24.417346   57198 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0812 11:43:24.435090   57198 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0812 11:43:24.442912   57198 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0812 11:43:24.446010   57198 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0812 11:43:24.497418   57198 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0812 11:43:24.497467   57198 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0812 11:43:24.497515   57198 ssh_runner.go:195] Run: which crictl
	I0812 11:43:24.497520   57198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0812 11:43:24.516634   57198 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0812 11:43:24.516675   57198 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0812 11:43:24.516720   57198 ssh_runner.go:195] Run: which crictl
	I0812 11:43:24.539233   57198 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0812 11:43:24.539293   57198 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0812 11:43:24.539345   57198 ssh_runner.go:195] Run: which crictl
	I0812 11:43:24.596835   57198 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0812 11:43:24.596897   57198 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0812 11:43:24.596934   57198 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0812 11:43:24.596950   57198 ssh_runner.go:195] Run: which crictl
	I0812 11:43:24.596971   57198 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0812 11:43:24.596949   57198 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0812 11:43:24.597019   57198 ssh_runner.go:195] Run: which crictl
	I0812 11:43:24.597028   57198 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0812 11:43:24.597056   57198 ssh_runner.go:195] Run: which crictl
	I0812 11:43:24.608309   57198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0812 11:43:24.608331   57198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0812 11:43:24.608372   57198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0812 11:43:24.608405   57198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0812 11:43:24.612545   57198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0812 11:43:24.612575   57198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0812 11:43:24.612604   57198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0812 11:43:24.772224   57198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0812 11:43:24.772290   57198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0812 11:43:24.772242   57198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0812 11:43:24.772340   57198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0812 11:43:24.772396   57198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0812 11:43:24.781565   57198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0812 11:43:24.781572   57198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0812 11:43:24.927881   57198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0812 11:43:24.927904   57198 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0812 11:43:24.927888   57198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0812 11:43:24.927986   57198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0812 11:43:24.928061   57198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0812 11:43:24.928102   57198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0812 11:43:24.928142   57198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0812 11:43:24.976586   57198 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 11:43:25.086074   57198 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0812 11:43:25.086122   57198 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0812 11:43:25.086188   57198 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0812 11:43:25.086342   57198 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0812 11:43:25.086389   57198 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0812 11:43:25.086443   57198 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0812 11:43:25.196233   57198 cache_images.go:92] duration metric: took 1.075199302s to LoadCachedImages
	W0812 11:43:25.196338   57198 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19409-3774/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19409-3774/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0812 11:43:25.196352   57198 kubeadm.go:934] updating node { 192.168.39.17 8443 v1.20.0 crio true true} ...
	I0812 11:43:25.196450   57198 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-835962 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-835962 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0812 11:43:25.196518   57198 ssh_runner.go:195] Run: crio config
	I0812 11:43:25.250313   57198 cni.go:84] Creating CNI manager for ""
	I0812 11:43:25.250341   57198 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:43:25.250353   57198 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0812 11:43:25.250374   57198 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.17 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-835962 NodeName:old-k8s-version-835962 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.17"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.17 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0812 11:43:25.250534   57198 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.17
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-835962"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.17
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.17"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0812 11:43:25.250609   57198 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0812 11:43:25.260968   57198 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 11:43:25.261059   57198 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0812 11:43:25.270556   57198 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0812 11:43:25.291589   57198 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 11:43:25.312240   57198 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0812 11:43:25.333870   57198 ssh_runner.go:195] Run: grep 192.168.39.17	control-plane.minikube.internal$ /etc/hosts
	I0812 11:43:25.338072   57198 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.17	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 11:43:25.351316   57198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:43:25.469404   57198 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 11:43:25.488697   57198 certs.go:68] Setting up /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962 for IP: 192.168.39.17
	I0812 11:43:25.488723   57198 certs.go:194] generating shared ca certs ...
	I0812 11:43:25.488742   57198 certs.go:226] acquiring lock for ca certs: {Name:mkab0b83a49d5695bc804bf960b468b7a47f73f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:43:25.489007   57198 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key
	I0812 11:43:25.489089   57198 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key
	I0812 11:43:25.489106   57198 certs.go:256] generating profile certs ...
	I0812 11:43:25.489264   57198 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/client.key
	I0812 11:43:25.489438   57198 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/apiserver.key.9ec5808d
	I0812 11:43:25.489537   57198 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/proxy-client.key
	I0812 11:43:25.489705   57198 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem (1338 bytes)
	W0812 11:43:25.489748   57198 certs.go:480] ignoring /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927_empty.pem, impossibly tiny 0 bytes
	I0812 11:43:25.489763   57198 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem (1679 bytes)
	I0812 11:43:25.489804   57198 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem (1082 bytes)
	I0812 11:43:25.489844   57198 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem (1123 bytes)
	I0812 11:43:25.489885   57198 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem (1679 bytes)
	I0812 11:43:25.489950   57198 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem (1708 bytes)
	I0812 11:43:25.490643   57198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 11:43:25.547688   57198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 11:43:25.576839   57198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 11:43:25.609814   57198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 11:43:25.643687   57198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0812 11:43:25.673482   57198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0812 11:43:25.704167   57198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 11:43:25.737190   57198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0812 11:43:25.763306   57198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /usr/share/ca-certificates/109272.pem (1708 bytes)
	I0812 11:43:25.787179   57198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 11:43:25.811679   57198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem --> /usr/share/ca-certificates/10927.pem (1338 bytes)
	I0812 11:43:25.836727   57198 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 11:43:25.854164   57198 ssh_runner.go:195] Run: openssl version
	I0812 11:43:25.859976   57198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109272.pem && ln -fs /usr/share/ca-certificates/109272.pem /etc/ssl/certs/109272.pem"
	I0812 11:43:25.871185   57198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109272.pem
	I0812 11:43:25.876986   57198 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 10:33 /usr/share/ca-certificates/109272.pem
	I0812 11:43:25.877050   57198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109272.pem
	I0812 11:43:25.884883   57198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109272.pem /etc/ssl/certs/3ec20f2e.0"
	I0812 11:43:25.899478   57198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 11:43:25.910621   57198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:43:25.915128   57198 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:43:25.915198   57198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:43:25.920907   57198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 11:43:25.931651   57198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10927.pem && ln -fs /usr/share/ca-certificates/10927.pem /etc/ssl/certs/10927.pem"
	I0812 11:43:25.942212   57198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10927.pem
	I0812 11:43:25.946914   57198 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 10:33 /usr/share/ca-certificates/10927.pem
	I0812 11:43:25.946968   57198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10927.pem
	I0812 11:43:25.952691   57198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10927.pem /etc/ssl/certs/51391683.0"
	I0812 11:43:25.963255   57198 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 11:43:25.967613   57198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0812 11:43:25.973467   57198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0812 11:43:25.979438   57198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0812 11:43:25.985810   57198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0812 11:43:25.991701   57198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0812 11:43:25.997693   57198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0812 11:43:26.003837   57198 kubeadm.go:392] StartCluster: {Name:old-k8s-version-835962 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-835962 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:43:26.003916   57198 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0812 11:43:26.004005   57198 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 11:43:26.040567   57198 cri.go:89] found id: ""
	I0812 11:43:26.040644   57198 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0812 11:43:26.050868   57198 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0812 11:43:26.050889   57198 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0812 11:43:26.050933   57198 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0812 11:43:26.060803   57198 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0812 11:43:26.061747   57198 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-835962" does not appear in /home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 11:43:26.062307   57198 kubeconfig.go:62] /home/jenkins/minikube-integration/19409-3774/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-835962" cluster setting kubeconfig missing "old-k8s-version-835962" context setting]
	I0812 11:43:26.065034   57198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/kubeconfig: {Name:mke68f1d372d8e5baa69199b426efaec54860499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:43:26.104672   57198 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0812 11:43:26.115753   57198 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.17
	I0812 11:43:26.115799   57198 kubeadm.go:1160] stopping kube-system containers ...
	I0812 11:43:26.115812   57198 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0812 11:43:26.115870   57198 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 11:43:26.158596   57198 cri.go:89] found id: ""
	I0812 11:43:26.158679   57198 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0812 11:43:26.174608   57198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 11:43:26.184457   57198 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 11:43:26.184481   57198 kubeadm.go:157] found existing configuration files:
	
	I0812 11:43:26.184527   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 11:43:26.193423   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 11:43:26.193503   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 11:43:26.202236   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 11:43:26.211226   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 11:43:26.211290   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 11:43:26.220882   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 11:43:26.229667   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 11:43:26.229726   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 11:43:26.240860   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 11:43:26.250176   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 11:43:26.250274   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 11:43:26.259630   57198 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 11:43:26.269736   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 11:43:26.393762   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 11:43:27.199743   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0812 11:43:27.402999   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 11:43:27.494799   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0812 11:43:27.597010   57198 api_server.go:52] waiting for apiserver process to appear ...
	I0812 11:43:27.597105   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:28.097779   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:28.597452   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:29.097973   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:29.597405   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:30.098159   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:30.597758   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:31.097397   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:31.598146   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:32.097923   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:32.598107   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:33.097265   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:33.598131   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:34.097513   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:34.598131   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:35.098058   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:35.597986   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:36.097347   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:36.597816   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:37.097447   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:37.597876   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:38.097569   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:38.597607   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:39.097874   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:39.597655   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:40.097613   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:40.597195   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:41.097984   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:41.598038   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:42.098084   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:42.598079   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:43.097532   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:43.597613   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:44.098110   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:44.598144   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:45.097676   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:45.597716   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:46.097600   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:46.597919   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:47.098129   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:47.597792   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:48.098138   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:48.597932   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:49.097648   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:49.597891   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:50.097339   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:50.598112   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:51.098091   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:51.597863   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:52.098137   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:52.597228   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:53.097222   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:53.598115   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:54.097525   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:54.598097   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:55.097922   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:55.598204   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:56.098077   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:56.598162   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:57.097609   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:57.598129   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:58.097831   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:58.598103   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:59.097563   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:43:59.598116   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:00.098069   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:00.597567   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:01.098097   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:01.597833   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:02.097728   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:02.598086   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:03.098102   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:03.598233   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:04.097920   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:04.597406   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:05.098047   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:05.597797   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:06.097984   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:06.598176   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:07.098089   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:07.597615   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:08.097966   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:08.598116   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:09.097206   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:09.597824   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:10.098171   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:10.597973   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:11.097261   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:11.597311   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:12.097760   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:12.597475   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:13.098144   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:13.598134   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:14.097290   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:14.598175   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:15.097696   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:15.598034   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:16.098159   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:16.597694   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:17.097238   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:17.597347   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:18.097606   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:18.598057   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:19.097382   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:19.598110   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:20.097874   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:20.597724   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:21.097157   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:21.597252   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:22.098103   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:22.597976   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:23.098068   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:23.598142   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:24.097575   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:24.597232   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:25.098092   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:25.597668   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:26.097431   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:26.597816   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:27.097758   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:27.597356   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:44:27.597445   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:44:27.660331   57198 cri.go:89] found id: ""
	I0812 11:44:27.660361   57198 logs.go:276] 0 containers: []
	W0812 11:44:27.660372   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:44:27.660380   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:44:27.660443   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:44:27.719983   57198 cri.go:89] found id: ""
	I0812 11:44:27.720021   57198 logs.go:276] 0 containers: []
	W0812 11:44:27.720033   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:44:27.720044   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:44:27.720111   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:44:27.780400   57198 cri.go:89] found id: ""
	I0812 11:44:27.780429   57198 logs.go:276] 0 containers: []
	W0812 11:44:27.780441   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:44:27.780448   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:44:27.780513   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:44:27.834397   57198 cri.go:89] found id: ""
	I0812 11:44:27.834429   57198 logs.go:276] 0 containers: []
	W0812 11:44:27.834437   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:44:27.834443   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:44:27.834494   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:44:27.891529   57198 cri.go:89] found id: ""
	I0812 11:44:27.891556   57198 logs.go:276] 0 containers: []
	W0812 11:44:27.891567   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:44:27.891574   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:44:27.891639   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:44:27.949506   57198 cri.go:89] found id: ""
	I0812 11:44:27.949534   57198 logs.go:276] 0 containers: []
	W0812 11:44:27.949545   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:44:27.949552   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:44:27.949629   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:44:28.001999   57198 cri.go:89] found id: ""
	I0812 11:44:28.002028   57198 logs.go:276] 0 containers: []
	W0812 11:44:28.002040   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:44:28.002047   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:44:28.002127   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:44:28.054298   57198 cri.go:89] found id: ""
	I0812 11:44:28.054327   57198 logs.go:276] 0 containers: []
	W0812 11:44:28.054338   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:44:28.054349   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:44:28.054364   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:44:28.134965   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:44:28.135009   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:44:28.153290   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:44:28.153328   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:44:28.317063   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:44:28.317084   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:44:28.317096   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:44:28.402829   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:44:28.402869   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:44:30.948635   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:30.962413   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:44:30.962499   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:44:31.003393   57198 cri.go:89] found id: ""
	I0812 11:44:31.003423   57198 logs.go:276] 0 containers: []
	W0812 11:44:31.003434   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:44:31.003441   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:44:31.003489   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:44:31.046841   57198 cri.go:89] found id: ""
	I0812 11:44:31.046876   57198 logs.go:276] 0 containers: []
	W0812 11:44:31.046891   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:44:31.046898   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:44:31.046961   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:44:31.079964   57198 cri.go:89] found id: ""
	I0812 11:44:31.079995   57198 logs.go:276] 0 containers: []
	W0812 11:44:31.080006   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:44:31.080013   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:44:31.080104   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:44:31.119788   57198 cri.go:89] found id: ""
	I0812 11:44:31.119817   57198 logs.go:276] 0 containers: []
	W0812 11:44:31.119824   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:44:31.119830   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:44:31.119881   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:44:31.157851   57198 cri.go:89] found id: ""
	I0812 11:44:31.157886   57198 logs.go:276] 0 containers: []
	W0812 11:44:31.157897   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:44:31.157905   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:44:31.157968   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:44:31.194913   57198 cri.go:89] found id: ""
	I0812 11:44:31.194941   57198 logs.go:276] 0 containers: []
	W0812 11:44:31.194952   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:44:31.194960   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:44:31.195038   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:44:31.233066   57198 cri.go:89] found id: ""
	I0812 11:44:31.233095   57198 logs.go:276] 0 containers: []
	W0812 11:44:31.233106   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:44:31.233113   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:44:31.233179   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:44:31.268283   57198 cri.go:89] found id: ""
	I0812 11:44:31.268306   57198 logs.go:276] 0 containers: []
	W0812 11:44:31.268315   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:44:31.268324   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:44:31.268336   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:44:31.321896   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:44:31.321933   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:44:31.359819   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:44:31.359854   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:44:31.457845   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:44:31.457870   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:44:31.457885   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:44:31.536967   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:44:31.537014   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:44:34.076593   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:34.091797   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:44:34.091869   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:44:34.126847   57198 cri.go:89] found id: ""
	I0812 11:44:34.126878   57198 logs.go:276] 0 containers: []
	W0812 11:44:34.126889   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:44:34.126897   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:44:34.126958   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:44:34.163382   57198 cri.go:89] found id: ""
	I0812 11:44:34.163410   57198 logs.go:276] 0 containers: []
	W0812 11:44:34.163422   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:44:34.163429   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:44:34.163488   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:44:34.195551   57198 cri.go:89] found id: ""
	I0812 11:44:34.195578   57198 logs.go:276] 0 containers: []
	W0812 11:44:34.195590   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:44:34.195597   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:44:34.195664   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:44:34.228816   57198 cri.go:89] found id: ""
	I0812 11:44:34.228847   57198 logs.go:276] 0 containers: []
	W0812 11:44:34.228859   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:44:34.228883   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:44:34.228945   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:44:34.269765   57198 cri.go:89] found id: ""
	I0812 11:44:34.269794   57198 logs.go:276] 0 containers: []
	W0812 11:44:34.269805   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:44:34.269814   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:44:34.269876   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:44:34.307614   57198 cri.go:89] found id: ""
	I0812 11:44:34.307646   57198 logs.go:276] 0 containers: []
	W0812 11:44:34.307660   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:44:34.307669   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:44:34.307727   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:44:34.340666   57198 cri.go:89] found id: ""
	I0812 11:44:34.340698   57198 logs.go:276] 0 containers: []
	W0812 11:44:34.340706   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:44:34.340716   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:44:34.340776   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:44:34.375032   57198 cri.go:89] found id: ""
	I0812 11:44:34.375058   57198 logs.go:276] 0 containers: []
	W0812 11:44:34.375066   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:44:34.375076   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:44:34.375089   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:44:34.425864   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:44:34.425911   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:44:34.439894   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:44:34.439923   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:44:34.519355   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:44:34.519386   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:44:34.519401   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:44:34.599715   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:44:34.599755   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:44:37.137892   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:37.154540   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:44:37.154607   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:44:37.202907   57198 cri.go:89] found id: ""
	I0812 11:44:37.202937   57198 logs.go:276] 0 containers: []
	W0812 11:44:37.202948   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:44:37.202970   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:44:37.203036   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:44:37.248385   57198 cri.go:89] found id: ""
	I0812 11:44:37.248415   57198 logs.go:276] 0 containers: []
	W0812 11:44:37.248422   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:44:37.248428   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:44:37.248482   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:44:37.297822   57198 cri.go:89] found id: ""
	I0812 11:44:37.297850   57198 logs.go:276] 0 containers: []
	W0812 11:44:37.297857   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:44:37.297862   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:44:37.297909   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:44:37.331518   57198 cri.go:89] found id: ""
	I0812 11:44:37.331546   57198 logs.go:276] 0 containers: []
	W0812 11:44:37.331570   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:44:37.331578   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:44:37.331646   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:44:37.365584   57198 cri.go:89] found id: ""
	I0812 11:44:37.365622   57198 logs.go:276] 0 containers: []
	W0812 11:44:37.365633   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:44:37.365639   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:44:37.365689   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:44:37.400109   57198 cri.go:89] found id: ""
	I0812 11:44:37.400138   57198 logs.go:276] 0 containers: []
	W0812 11:44:37.400146   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:44:37.400151   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:44:37.400210   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:44:37.433865   57198 cri.go:89] found id: ""
	I0812 11:44:37.433896   57198 logs.go:276] 0 containers: []
	W0812 11:44:37.433907   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:44:37.433915   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:44:37.433976   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:44:37.468422   57198 cri.go:89] found id: ""
	I0812 11:44:37.468454   57198 logs.go:276] 0 containers: []
	W0812 11:44:37.468463   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:44:37.468546   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:44:37.468573   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:44:37.517954   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:44:37.517988   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:44:37.532086   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:44:37.532113   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:44:37.611693   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:44:37.611734   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:44:37.611752   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:44:37.704628   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:44:37.704691   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:44:40.252190   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:40.265334   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:44:40.265409   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:44:40.304722   57198 cri.go:89] found id: ""
	I0812 11:44:40.304751   57198 logs.go:276] 0 containers: []
	W0812 11:44:40.304763   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:44:40.304770   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:44:40.304820   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:44:40.339095   57198 cri.go:89] found id: ""
	I0812 11:44:40.339132   57198 logs.go:276] 0 containers: []
	W0812 11:44:40.339155   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:44:40.339164   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:44:40.339229   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:44:40.373084   57198 cri.go:89] found id: ""
	I0812 11:44:40.373108   57198 logs.go:276] 0 containers: []
	W0812 11:44:40.373117   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:44:40.373123   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:44:40.373181   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:44:40.406136   57198 cri.go:89] found id: ""
	I0812 11:44:40.406165   57198 logs.go:276] 0 containers: []
	W0812 11:44:40.406174   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:44:40.406184   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:44:40.406276   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:44:40.440508   57198 cri.go:89] found id: ""
	I0812 11:44:40.440543   57198 logs.go:276] 0 containers: []
	W0812 11:44:40.440565   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:44:40.440573   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:44:40.440640   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:44:40.476058   57198 cri.go:89] found id: ""
	I0812 11:44:40.476089   57198 logs.go:276] 0 containers: []
	W0812 11:44:40.476100   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:44:40.476108   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:44:40.476170   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:44:40.512798   57198 cri.go:89] found id: ""
	I0812 11:44:40.512828   57198 logs.go:276] 0 containers: []
	W0812 11:44:40.512839   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:44:40.512846   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:44:40.512928   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:44:40.548552   57198 cri.go:89] found id: ""
	I0812 11:44:40.548582   57198 logs.go:276] 0 containers: []
	W0812 11:44:40.548594   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:44:40.548611   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:44:40.548626   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:44:40.623837   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:44:40.623875   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:44:40.662981   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:44:40.663020   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:44:40.712746   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:44:40.712784   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:44:40.726493   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:44:40.726528   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:44:40.796317   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
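
Every "describe nodes" attempt in this stretch ends with the same connection-refused error on localhost:8443, so the v1.20.0 API server never comes up on this node; the logs minikube collects in the same loop (kubelet, CRI-O, dmesg, container status) are where the underlying failure would surface. The same sources can be pulled manually; a sketch, with the commands copied from the log above:

    # Same log sources the loop gathers, for inspecting why the static
    # control-plane pods never start (commands as they appear in the log).
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a || sudo docker ps -a   # container-status fallback
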
	I0812 11:44:43.297484   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:43.311524   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:44:43.311590   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:44:43.351678   57198 cri.go:89] found id: ""
	I0812 11:44:43.351705   57198 logs.go:276] 0 containers: []
	W0812 11:44:43.351715   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:44:43.351722   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:44:43.351787   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:44:43.390927   57198 cri.go:89] found id: ""
	I0812 11:44:43.390963   57198 logs.go:276] 0 containers: []
	W0812 11:44:43.390975   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:44:43.390983   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:44:43.391079   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:44:43.427424   57198 cri.go:89] found id: ""
	I0812 11:44:43.427453   57198 logs.go:276] 0 containers: []
	W0812 11:44:43.427463   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:44:43.427471   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:44:43.427538   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:44:43.463936   57198 cri.go:89] found id: ""
	I0812 11:44:43.463966   57198 logs.go:276] 0 containers: []
	W0812 11:44:43.463976   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:44:43.463984   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:44:43.464045   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:44:43.500152   57198 cri.go:89] found id: ""
	I0812 11:44:43.500190   57198 logs.go:276] 0 containers: []
	W0812 11:44:43.500202   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:44:43.500210   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:44:43.500295   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:44:43.534863   57198 cri.go:89] found id: ""
	I0812 11:44:43.534885   57198 logs.go:276] 0 containers: []
	W0812 11:44:43.534892   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:44:43.534898   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:44:43.534942   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:44:43.570225   57198 cri.go:89] found id: ""
	I0812 11:44:43.570257   57198 logs.go:276] 0 containers: []
	W0812 11:44:43.570268   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:44:43.570275   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:44:43.570338   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:44:43.612332   57198 cri.go:89] found id: ""
	I0812 11:44:43.612357   57198 logs.go:276] 0 containers: []
	W0812 11:44:43.612365   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:44:43.612372   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:44:43.612386   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:44:43.664527   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:44:43.664563   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:44:43.679865   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:44:43.679892   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:44:43.748282   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:44:43.748309   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:44:43.748323   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:44:43.829552   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:44:43.829589   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:44:46.377995   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:46.390780   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:44:46.390852   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:44:46.425556   57198 cri.go:89] found id: ""
	I0812 11:44:46.425584   57198 logs.go:276] 0 containers: []
	W0812 11:44:46.425592   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:44:46.425597   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:44:46.425644   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:44:46.461796   57198 cri.go:89] found id: ""
	I0812 11:44:46.461823   57198 logs.go:276] 0 containers: []
	W0812 11:44:46.461830   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:44:46.461835   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:44:46.461892   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:44:46.495592   57198 cri.go:89] found id: ""
	I0812 11:44:46.495621   57198 logs.go:276] 0 containers: []
	W0812 11:44:46.495632   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:44:46.495640   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:44:46.495699   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:44:46.530904   57198 cri.go:89] found id: ""
	I0812 11:44:46.530930   57198 logs.go:276] 0 containers: []
	W0812 11:44:46.530941   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:44:46.530949   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:44:46.531008   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:44:46.568563   57198 cri.go:89] found id: ""
	I0812 11:44:46.568592   57198 logs.go:276] 0 containers: []
	W0812 11:44:46.568604   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:44:46.568610   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:44:46.568670   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:44:46.602873   57198 cri.go:89] found id: ""
	I0812 11:44:46.602909   57198 logs.go:276] 0 containers: []
	W0812 11:44:46.602921   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:44:46.602928   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:44:46.602990   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:44:46.635734   57198 cri.go:89] found id: ""
	I0812 11:44:46.635763   57198 logs.go:276] 0 containers: []
	W0812 11:44:46.635775   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:44:46.635782   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:44:46.635849   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:44:46.669924   57198 cri.go:89] found id: ""
	I0812 11:44:46.669956   57198 logs.go:276] 0 containers: []
	W0812 11:44:46.669966   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:44:46.669974   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:44:46.669986   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:44:46.722807   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:44:46.722845   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:44:46.735804   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:44:46.735833   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:44:46.808255   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:44:46.808279   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:44:46.808295   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:44:46.891184   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:44:46.891247   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:44:49.430444   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:49.443036   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:44:49.443109   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:44:49.477350   57198 cri.go:89] found id: ""
	I0812 11:44:49.477386   57198 logs.go:276] 0 containers: []
	W0812 11:44:49.477398   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:44:49.477407   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:44:49.477473   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:44:49.512132   57198 cri.go:89] found id: ""
	I0812 11:44:49.512164   57198 logs.go:276] 0 containers: []
	W0812 11:44:49.512176   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:44:49.512184   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:44:49.512244   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:44:49.546286   57198 cri.go:89] found id: ""
	I0812 11:44:49.546321   57198 logs.go:276] 0 containers: []
	W0812 11:44:49.546335   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:44:49.546344   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:44:49.546424   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:44:49.584067   57198 cri.go:89] found id: ""
	I0812 11:44:49.584097   57198 logs.go:276] 0 containers: []
	W0812 11:44:49.584107   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:44:49.584115   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:44:49.584179   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:44:49.623903   57198 cri.go:89] found id: ""
	I0812 11:44:49.623931   57198 logs.go:276] 0 containers: []
	W0812 11:44:49.623942   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:44:49.623949   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:44:49.624019   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:44:49.658031   57198 cri.go:89] found id: ""
	I0812 11:44:49.658076   57198 logs.go:276] 0 containers: []
	W0812 11:44:49.658089   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:44:49.658098   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:44:49.658158   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:44:49.693464   57198 cri.go:89] found id: ""
	I0812 11:44:49.693493   57198 logs.go:276] 0 containers: []
	W0812 11:44:49.693504   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:44:49.693512   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:44:49.693563   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:44:49.731124   57198 cri.go:89] found id: ""
	I0812 11:44:49.731156   57198 logs.go:276] 0 containers: []
	W0812 11:44:49.731168   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:44:49.731179   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:44:49.731195   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:44:49.783209   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:44:49.783247   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:44:49.798120   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:44:49.798149   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:44:49.868433   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:44:49.868461   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:44:49.868476   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:44:49.945388   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:44:49.945433   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:44:52.482785   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:52.495578   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:44:52.495644   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:44:52.530018   57198 cri.go:89] found id: ""
	I0812 11:44:52.530052   57198 logs.go:276] 0 containers: []
	W0812 11:44:52.530063   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:44:52.530070   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:44:52.530138   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:44:52.564881   57198 cri.go:89] found id: ""
	I0812 11:44:52.564909   57198 logs.go:276] 0 containers: []
	W0812 11:44:52.564917   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:44:52.564922   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:44:52.564982   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:44:52.605067   57198 cri.go:89] found id: ""
	I0812 11:44:52.605099   57198 logs.go:276] 0 containers: []
	W0812 11:44:52.605110   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:44:52.605127   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:44:52.605188   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:44:52.641188   57198 cri.go:89] found id: ""
	I0812 11:44:52.641217   57198 logs.go:276] 0 containers: []
	W0812 11:44:52.641228   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:44:52.641234   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:44:52.641284   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:44:52.677240   57198 cri.go:89] found id: ""
	I0812 11:44:52.677273   57198 logs.go:276] 0 containers: []
	W0812 11:44:52.677282   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:44:52.677289   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:44:52.677337   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:44:52.710152   57198 cri.go:89] found id: ""
	I0812 11:44:52.710179   57198 logs.go:276] 0 containers: []
	W0812 11:44:52.710190   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:44:52.710197   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:44:52.710261   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:44:52.743806   57198 cri.go:89] found id: ""
	I0812 11:44:52.743839   57198 logs.go:276] 0 containers: []
	W0812 11:44:52.743851   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:44:52.743856   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:44:52.743910   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:44:52.778026   57198 cri.go:89] found id: ""
	I0812 11:44:52.778057   57198 logs.go:276] 0 containers: []
	W0812 11:44:52.778069   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:44:52.778079   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:44:52.778094   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:44:52.830074   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:44:52.830109   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:44:52.843992   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:44:52.844019   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:44:52.912667   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:44:52.912688   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:44:52.912702   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:44:52.994478   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:44:52.994519   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:44:55.537785   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:55.553064   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:44:55.553143   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:44:55.591184   57198 cri.go:89] found id: ""
	I0812 11:44:55.591213   57198 logs.go:276] 0 containers: []
	W0812 11:44:55.591222   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:44:55.591229   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:44:55.591294   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:44:55.627267   57198 cri.go:89] found id: ""
	I0812 11:44:55.627309   57198 logs.go:276] 0 containers: []
	W0812 11:44:55.627320   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:44:55.627328   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:44:55.627389   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:44:55.660108   57198 cri.go:89] found id: ""
	I0812 11:44:55.660138   57198 logs.go:276] 0 containers: []
	W0812 11:44:55.660146   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:44:55.660151   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:44:55.660202   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:44:55.695679   57198 cri.go:89] found id: ""
	I0812 11:44:55.695701   57198 logs.go:276] 0 containers: []
	W0812 11:44:55.695710   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:44:55.695715   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:44:55.695763   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:44:55.728719   57198 cri.go:89] found id: ""
	I0812 11:44:55.728748   57198 logs.go:276] 0 containers: []
	W0812 11:44:55.728758   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:44:55.728763   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:44:55.728817   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:44:55.762749   57198 cri.go:89] found id: ""
	I0812 11:44:55.762771   57198 logs.go:276] 0 containers: []
	W0812 11:44:55.762779   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:44:55.762784   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:44:55.762842   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:44:55.796977   57198 cri.go:89] found id: ""
	I0812 11:44:55.797002   57198 logs.go:276] 0 containers: []
	W0812 11:44:55.797013   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:44:55.797020   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:44:55.797072   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:44:55.833992   57198 cri.go:89] found id: ""
	I0812 11:44:55.834038   57198 logs.go:276] 0 containers: []
	W0812 11:44:55.834048   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:44:55.834057   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:44:55.834072   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:44:55.903414   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:44:55.903440   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:44:55.903457   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:44:55.985506   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:44:55.985547   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:44:56.027380   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:44:56.027408   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:44:56.081003   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:44:56.081039   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:44:58.595233   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:44:58.608273   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:44:58.608344   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:44:58.648158   57198 cri.go:89] found id: ""
	I0812 11:44:58.648190   57198 logs.go:276] 0 containers: []
	W0812 11:44:58.648201   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:44:58.648208   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:44:58.648274   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:44:58.685920   57198 cri.go:89] found id: ""
	I0812 11:44:58.685950   57198 logs.go:276] 0 containers: []
	W0812 11:44:58.685965   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:44:58.685971   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:44:58.686034   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:44:58.718798   57198 cri.go:89] found id: ""
	I0812 11:44:58.718829   57198 logs.go:276] 0 containers: []
	W0812 11:44:58.718841   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:44:58.718847   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:44:58.718901   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:44:58.756410   57198 cri.go:89] found id: ""
	I0812 11:44:58.756437   57198 logs.go:276] 0 containers: []
	W0812 11:44:58.756450   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:44:58.756455   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:44:58.756510   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:44:58.790248   57198 cri.go:89] found id: ""
	I0812 11:44:58.790275   57198 logs.go:276] 0 containers: []
	W0812 11:44:58.790286   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:44:58.790294   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:44:58.790358   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:44:58.827509   57198 cri.go:89] found id: ""
	I0812 11:44:58.827540   57198 logs.go:276] 0 containers: []
	W0812 11:44:58.827550   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:44:58.827557   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:44:58.827607   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:44:58.861284   57198 cri.go:89] found id: ""
	I0812 11:44:58.861316   57198 logs.go:276] 0 containers: []
	W0812 11:44:58.861325   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:44:58.861331   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:44:58.861388   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:44:58.894417   57198 cri.go:89] found id: ""
	I0812 11:44:58.894447   57198 logs.go:276] 0 containers: []
	W0812 11:44:58.894455   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:44:58.894466   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:44:58.894480   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:44:58.946460   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:44:58.946507   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:44:58.959811   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:44:58.959843   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:44:59.033847   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:44:59.033878   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:44:59.033893   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:44:59.112075   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:44:59.112109   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:45:01.654240   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:45:01.667406   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:45:01.667476   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:45:01.705516   57198 cri.go:89] found id: ""
	I0812 11:45:01.705544   57198 logs.go:276] 0 containers: []
	W0812 11:45:01.705552   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:45:01.705558   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:45:01.705643   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:45:01.741180   57198 cri.go:89] found id: ""
	I0812 11:45:01.741206   57198 logs.go:276] 0 containers: []
	W0812 11:45:01.741214   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:45:01.741219   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:45:01.741273   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:45:01.775567   57198 cri.go:89] found id: ""
	I0812 11:45:01.775597   57198 logs.go:276] 0 containers: []
	W0812 11:45:01.775609   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:45:01.775615   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:45:01.775690   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:45:01.812162   57198 cri.go:89] found id: ""
	I0812 11:45:01.812186   57198 logs.go:276] 0 containers: []
	W0812 11:45:01.812193   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:45:01.812199   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:45:01.812250   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:45:01.845628   57198 cri.go:89] found id: ""
	I0812 11:45:01.845656   57198 logs.go:276] 0 containers: []
	W0812 11:45:01.845664   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:45:01.845669   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:45:01.845716   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:45:01.879879   57198 cri.go:89] found id: ""
	I0812 11:45:01.879910   57198 logs.go:276] 0 containers: []
	W0812 11:45:01.879922   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:45:01.879930   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:45:01.879998   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:45:01.918101   57198 cri.go:89] found id: ""
	I0812 11:45:01.918123   57198 logs.go:276] 0 containers: []
	W0812 11:45:01.918134   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:45:01.918141   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:45:01.918197   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:45:01.953742   57198 cri.go:89] found id: ""
	I0812 11:45:01.953769   57198 logs.go:276] 0 containers: []
	W0812 11:45:01.953777   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:45:01.953785   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:45:01.953796   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:45:02.029665   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:45:02.029687   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:45:02.029703   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:45:02.111823   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:45:02.111862   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:45:02.151170   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:45:02.151201   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:45:02.202390   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:45:02.202429   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
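
Since the symptom throughout is "The connection to the server localhost:8443 was refused", a quick way to confirm whether the apiserver ever binds the port is to watch the socket and its health endpoint directly. A sketch assuming `ss` and `curl` are available in the node image; these commands are not taken from the log:

    # Hypothetical direct checks for the refused port (not part of minikube's loop).
    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
    curl -sk https://localhost:8443/healthz || echo "apiserver health check failed"
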
	I0812 11:45:04.716898   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:45:04.732706   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:45:04.732774   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:45:04.770508   57198 cri.go:89] found id: ""
	I0812 11:45:04.770533   57198 logs.go:276] 0 containers: []
	W0812 11:45:04.770542   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:45:04.770548   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:45:04.770606   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:45:04.811904   57198 cri.go:89] found id: ""
	I0812 11:45:04.811930   57198 logs.go:276] 0 containers: []
	W0812 11:45:04.811938   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:45:04.811943   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:45:04.811997   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:45:04.844992   57198 cri.go:89] found id: ""
	I0812 11:45:04.845032   57198 logs.go:276] 0 containers: []
	W0812 11:45:04.845043   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:45:04.845048   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:45:04.845109   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:45:04.878487   57198 cri.go:89] found id: ""
	I0812 11:45:04.878526   57198 logs.go:276] 0 containers: []
	W0812 11:45:04.878534   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:45:04.878540   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:45:04.878591   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:45:04.911109   57198 cri.go:89] found id: ""
	I0812 11:45:04.911139   57198 logs.go:276] 0 containers: []
	W0812 11:45:04.911150   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:45:04.911157   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:45:04.911220   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:45:04.944979   57198 cri.go:89] found id: ""
	I0812 11:45:04.945004   57198 logs.go:276] 0 containers: []
	W0812 11:45:04.945012   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:45:04.945018   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:45:04.945075   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:45:04.978883   57198 cri.go:89] found id: ""
	I0812 11:45:04.978911   57198 logs.go:276] 0 containers: []
	W0812 11:45:04.978919   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:45:04.978924   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:45:04.978970   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:45:05.012106   57198 cri.go:89] found id: ""
	I0812 11:45:05.012135   57198 logs.go:276] 0 containers: []
	W0812 11:45:05.012144   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:45:05.012152   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:45:05.012163   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:45:05.084706   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:45:05.084729   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:45:05.084742   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:45:05.161500   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:45:05.161542   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:45:05.198936   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:45:05.198966   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:45:05.249020   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:45:05.249058   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:45:07.762237   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:45:07.774724   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:45:07.774797   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:45:07.807708   57198 cri.go:89] found id: ""
	I0812 11:45:07.807742   57198 logs.go:276] 0 containers: []
	W0812 11:45:07.807753   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:45:07.807762   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:45:07.807821   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:45:07.842279   57198 cri.go:89] found id: ""
	I0812 11:45:07.842308   57198 logs.go:276] 0 containers: []
	W0812 11:45:07.842319   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:45:07.842340   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:45:07.842403   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:45:07.875136   57198 cri.go:89] found id: ""
	I0812 11:45:07.875178   57198 logs.go:276] 0 containers: []
	W0812 11:45:07.875189   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:45:07.875196   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:45:07.875261   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:45:07.910295   57198 cri.go:89] found id: ""
	I0812 11:45:07.910327   57198 logs.go:276] 0 containers: []
	W0812 11:45:07.910338   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:45:07.910345   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:45:07.910418   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:45:07.948244   57198 cri.go:89] found id: ""
	I0812 11:45:07.948270   57198 logs.go:276] 0 containers: []
	W0812 11:45:07.948280   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:45:07.948288   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:45:07.948351   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:45:07.981960   57198 cri.go:89] found id: ""
	I0812 11:45:07.981984   57198 logs.go:276] 0 containers: []
	W0812 11:45:07.981992   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:45:07.981998   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:45:07.982053   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:45:08.015764   57198 cri.go:89] found id: ""
	I0812 11:45:08.015798   57198 logs.go:276] 0 containers: []
	W0812 11:45:08.015810   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:45:08.015817   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:45:08.015879   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:45:08.051122   57198 cri.go:89] found id: ""
	I0812 11:45:08.051156   57198 logs.go:276] 0 containers: []
	W0812 11:45:08.051167   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:45:08.051178   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:45:08.051192   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:45:08.102683   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:45:08.102722   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:45:08.116145   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:45:08.116175   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:45:08.187587   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:45:08.187612   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:45:08.187627   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:45:08.263009   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:45:08.263046   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:45:10.801518   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:45:10.814256   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:45:10.814321   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:45:10.846738   57198 cri.go:89] found id: ""
	I0812 11:45:10.846764   57198 logs.go:276] 0 containers: []
	W0812 11:45:10.846772   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:45:10.846779   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:45:10.846842   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:45:10.883703   57198 cri.go:89] found id: ""
	I0812 11:45:10.883728   57198 logs.go:276] 0 containers: []
	W0812 11:45:10.883736   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:45:10.883741   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:45:10.883790   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:45:10.926326   57198 cri.go:89] found id: ""
	I0812 11:45:10.926359   57198 logs.go:276] 0 containers: []
	W0812 11:45:10.926369   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:45:10.926374   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:45:10.926424   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:45:10.964668   57198 cri.go:89] found id: ""
	I0812 11:45:10.964693   57198 logs.go:276] 0 containers: []
	W0812 11:45:10.964702   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:45:10.964708   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:45:10.964757   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:45:11.005440   57198 cri.go:89] found id: ""
	I0812 11:45:11.005475   57198 logs.go:276] 0 containers: []
	W0812 11:45:11.005484   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:45:11.005490   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:45:11.005564   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:45:11.039041   57198 cri.go:89] found id: ""
	I0812 11:45:11.039070   57198 logs.go:276] 0 containers: []
	W0812 11:45:11.039078   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:45:11.039089   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:45:11.039146   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:45:11.075927   57198 cri.go:89] found id: ""
	I0812 11:45:11.075963   57198 logs.go:276] 0 containers: []
	W0812 11:45:11.075975   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:45:11.075983   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:45:11.076059   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:45:11.110810   57198 cri.go:89] found id: ""
	I0812 11:45:11.110846   57198 logs.go:276] 0 containers: []
	W0812 11:45:11.110856   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:45:11.110867   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:45:11.110881   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:45:11.199464   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:45:11.199509   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:45:11.235242   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:45:11.235277   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:45:11.289781   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:45:11.289830   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:45:11.304513   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:45:11.304549   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:45:11.369104   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:45:13.870111   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:45:13.884696   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:45:13.884763   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:45:13.923834   57198 cri.go:89] found id: ""
	I0812 11:45:13.923864   57198 logs.go:276] 0 containers: []
	W0812 11:45:13.923872   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:45:13.923878   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:45:13.923937   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:45:13.958657   57198 cri.go:89] found id: ""
	I0812 11:45:13.958686   57198 logs.go:276] 0 containers: []
	W0812 11:45:13.958694   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:45:13.958699   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:45:13.958761   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:45:13.995357   57198 cri.go:89] found id: ""
	I0812 11:45:13.995389   57198 logs.go:276] 0 containers: []
	W0812 11:45:13.995399   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:45:13.995404   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:45:13.995453   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:45:14.032949   57198 cri.go:89] found id: ""
	I0812 11:45:14.032974   57198 logs.go:276] 0 containers: []
	W0812 11:45:14.032982   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:45:14.032989   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:45:14.033046   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:45:14.067729   57198 cri.go:89] found id: ""
	I0812 11:45:14.067753   57198 logs.go:276] 0 containers: []
	W0812 11:45:14.067761   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:45:14.067767   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:45:14.067820   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:45:14.101971   57198 cri.go:89] found id: ""
	I0812 11:45:14.102005   57198 logs.go:276] 0 containers: []
	W0812 11:45:14.102013   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:45:14.102019   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:45:14.102079   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:45:14.139492   57198 cri.go:89] found id: ""
	I0812 11:45:14.139519   57198 logs.go:276] 0 containers: []
	W0812 11:45:14.139530   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:45:14.139537   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:45:14.139604   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:45:14.173014   57198 cri.go:89] found id: ""
	I0812 11:45:14.173042   57198 logs.go:276] 0 containers: []
	W0812 11:45:14.173050   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:45:14.173058   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:45:14.173070   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:45:14.252187   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:45:14.252239   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:45:14.290860   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:45:14.290895   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:45:14.344687   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:45:14.344725   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:45:14.357938   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:45:14.357973   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:45:14.433740   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:45:16.934712   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:45:16.948530   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:45:16.948592   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:45:16.987531   57198 cri.go:89] found id: ""
	I0812 11:45:16.987561   57198 logs.go:276] 0 containers: []
	W0812 11:45:16.987570   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:45:16.987575   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:45:16.987639   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:45:17.023257   57198 cri.go:89] found id: ""
	I0812 11:45:17.023286   57198 logs.go:276] 0 containers: []
	W0812 11:45:17.023303   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:45:17.023309   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:45:17.023369   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:45:17.058148   57198 cri.go:89] found id: ""
	I0812 11:45:17.058180   57198 logs.go:276] 0 containers: []
	W0812 11:45:17.058192   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:45:17.058200   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:45:17.058249   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:45:17.092719   57198 cri.go:89] found id: ""
	I0812 11:45:17.092746   57198 logs.go:276] 0 containers: []
	W0812 11:45:17.092754   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:45:17.092759   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:45:17.092806   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:45:17.126708   57198 cri.go:89] found id: ""
	I0812 11:45:17.126737   57198 logs.go:276] 0 containers: []
	W0812 11:45:17.126745   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:45:17.126751   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:45:17.126800   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:45:17.160115   57198 cri.go:89] found id: ""
	I0812 11:45:17.160150   57198 logs.go:276] 0 containers: []
	W0812 11:45:17.160161   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:45:17.160169   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:45:17.160233   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:45:17.193394   57198 cri.go:89] found id: ""
	I0812 11:45:17.193427   57198 logs.go:276] 0 containers: []
	W0812 11:45:17.193436   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:45:17.193441   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:45:17.193486   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:45:17.226271   57198 cri.go:89] found id: ""
	I0812 11:45:17.226306   57198 logs.go:276] 0 containers: []
	W0812 11:45:17.226319   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:45:17.226331   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:45:17.226347   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:45:17.301415   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:45:17.301449   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:45:17.337563   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:45:17.337592   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:45:17.388118   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:45:17.388158   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:45:17.401466   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:45:17.401497   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:45:17.476439   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:45:19.977378   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:45:19.990370   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:45:19.990431   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:45:20.025241   57198 cri.go:89] found id: ""
	I0812 11:45:20.025268   57198 logs.go:276] 0 containers: []
	W0812 11:45:20.025281   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:45:20.025288   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:45:20.025340   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:45:20.057281   57198 cri.go:89] found id: ""
	I0812 11:45:20.057314   57198 logs.go:276] 0 containers: []
	W0812 11:45:20.057326   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:45:20.057334   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:45:20.057398   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:45:20.094529   57198 cri.go:89] found id: ""
	I0812 11:45:20.094558   57198 logs.go:276] 0 containers: []
	W0812 11:45:20.094567   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:45:20.094572   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:45:20.094630   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:45:20.130840   57198 cri.go:89] found id: ""
	I0812 11:45:20.130872   57198 logs.go:276] 0 containers: []
	W0812 11:45:20.130884   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:45:20.130891   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:45:20.130952   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:45:20.167551   57198 cri.go:89] found id: ""
	I0812 11:45:20.167586   57198 logs.go:276] 0 containers: []
	W0812 11:45:20.167597   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:45:20.167605   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:45:20.167678   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:45:20.205583   57198 cri.go:89] found id: ""
	I0812 11:45:20.205614   57198 logs.go:276] 0 containers: []
	W0812 11:45:20.205636   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:45:20.205643   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:45:20.205709   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:45:20.239367   57198 cri.go:89] found id: ""
	I0812 11:45:20.239400   57198 logs.go:276] 0 containers: []
	W0812 11:45:20.239412   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:45:20.239420   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:45:20.239481   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:45:20.275452   57198 cri.go:89] found id: ""
	I0812 11:45:20.275484   57198 logs.go:276] 0 containers: []
	W0812 11:45:20.275494   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:45:20.275505   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:45:20.275525   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:45:20.314632   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:45:20.314659   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:45:20.366709   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:45:20.366747   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:45:20.380013   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:45:20.380045   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:45:20.449494   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:45:20.449513   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:45:20.449532   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:45:23.027572   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:45:23.041344   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:45:23.041421   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:45:23.079059   57198 cri.go:89] found id: ""
	I0812 11:45:23.079090   57198 logs.go:276] 0 containers: []
	W0812 11:45:23.079101   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:45:23.079109   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:45:23.079173   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:45:23.112299   57198 cri.go:89] found id: ""
	I0812 11:45:23.112332   57198 logs.go:276] 0 containers: []
	W0812 11:45:23.112343   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:45:23.112350   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:45:23.112417   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:45:23.145157   57198 cri.go:89] found id: ""
	I0812 11:45:23.145186   57198 logs.go:276] 0 containers: []
	W0812 11:45:23.145194   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:45:23.145199   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:45:23.145257   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:45:23.179699   57198 cri.go:89] found id: ""
	I0812 11:45:23.179733   57198 logs.go:276] 0 containers: []
	W0812 11:45:23.179752   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:45:23.179759   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:45:23.179820   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:45:23.214180   57198 cri.go:89] found id: ""
	I0812 11:45:23.214212   57198 logs.go:276] 0 containers: []
	W0812 11:45:23.214220   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:45:23.214226   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:45:23.214278   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:45:23.252414   57198 cri.go:89] found id: ""
	I0812 11:45:23.252443   57198 logs.go:276] 0 containers: []
	W0812 11:45:23.252454   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:45:23.252462   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:45:23.252524   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:45:23.284920   57198 cri.go:89] found id: ""
	I0812 11:45:23.284948   57198 logs.go:276] 0 containers: []
	W0812 11:45:23.284957   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:45:23.284964   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:45:23.285025   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:45:23.331224   57198 cri.go:89] found id: ""
	I0812 11:45:23.331255   57198 logs.go:276] 0 containers: []
	W0812 11:45:23.331266   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:45:23.331276   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:45:23.331291   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:45:23.387841   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:45:23.387879   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:45:23.403732   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:45:23.403758   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:45:23.474688   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:45:23.474716   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:45:23.474728   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:45:23.549398   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:45:23.549440   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:45:26.089124   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:45:26.102128   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:45:26.102183   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:45:26.135583   57198 cri.go:89] found id: ""
	I0812 11:45:26.135617   57198 logs.go:276] 0 containers: []
	W0812 11:45:26.135628   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:45:26.135637   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:45:26.135699   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:45:26.169156   57198 cri.go:89] found id: ""
	I0812 11:45:26.169198   57198 logs.go:276] 0 containers: []
	W0812 11:45:26.169210   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:45:26.169217   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:45:26.169280   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:45:26.203889   57198 cri.go:89] found id: ""
	I0812 11:45:26.203918   57198 logs.go:276] 0 containers: []
	W0812 11:45:26.203925   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:45:26.203931   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:45:26.203978   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:45:26.237969   57198 cri.go:89] found id: ""
	I0812 11:45:26.238003   57198 logs.go:276] 0 containers: []
	W0812 11:45:26.238015   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:45:26.238023   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:45:26.238087   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:45:26.270708   57198 cri.go:89] found id: ""
	I0812 11:45:26.270732   57198 logs.go:276] 0 containers: []
	W0812 11:45:26.270739   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:45:26.270745   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:45:26.270796   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:45:26.307535   57198 cri.go:89] found id: ""
	I0812 11:45:26.307559   57198 logs.go:276] 0 containers: []
	W0812 11:45:26.307570   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:45:26.307577   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:45:26.307636   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:45:26.342612   57198 cri.go:89] found id: ""
	I0812 11:45:26.342641   57198 logs.go:276] 0 containers: []
	W0812 11:45:26.342649   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:45:26.342654   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:45:26.342702   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:45:26.381819   57198 cri.go:89] found id: ""
	I0812 11:45:26.381853   57198 logs.go:276] 0 containers: []
	W0812 11:45:26.381863   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:45:26.381874   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:45:26.381890   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:45:26.434324   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:45:26.434359   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:45:26.447236   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:45:26.447263   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:45:26.508570   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:45:26.508600   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:45:26.508614   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:45:26.582007   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:45:26.582043   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:45:29.120342   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:45:29.140162   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:45:29.140222   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:45:29.174726   57198 cri.go:89] found id: ""
	I0812 11:45:29.174756   57198 logs.go:276] 0 containers: []
	W0812 11:45:29.174764   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:45:29.174769   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:45:29.174818   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:45:29.214284   57198 cri.go:89] found id: ""
	I0812 11:45:29.214318   57198 logs.go:276] 0 containers: []
	W0812 11:45:29.214329   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:45:29.214336   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:45:29.214403   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:45:29.250747   57198 cri.go:89] found id: ""
	I0812 11:45:29.250780   57198 logs.go:276] 0 containers: []
	W0812 11:45:29.250790   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:45:29.250797   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:45:29.250874   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:45:29.287212   57198 cri.go:89] found id: ""
	I0812 11:45:29.287247   57198 logs.go:276] 0 containers: []
	W0812 11:45:29.287263   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:45:29.287271   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:45:29.287329   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:45:29.322981   57198 cri.go:89] found id: ""
	I0812 11:45:29.323015   57198 logs.go:276] 0 containers: []
	W0812 11:45:29.323026   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:45:29.323033   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:45:29.323097   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:45:29.357543   57198 cri.go:89] found id: ""
	I0812 11:45:29.357575   57198 logs.go:276] 0 containers: []
	W0812 11:45:29.357584   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:45:29.357591   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:45:29.357643   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:45:29.390847   57198 cri.go:89] found id: ""
	I0812 11:45:29.390885   57198 logs.go:276] 0 containers: []
	W0812 11:45:29.390894   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:45:29.390900   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:45:29.390950   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:45:29.424258   57198 cri.go:89] found id: ""
	I0812 11:45:29.424286   57198 logs.go:276] 0 containers: []
	W0812 11:45:29.424297   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:45:29.424308   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:45:29.424322   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:45:29.461528   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:45:29.461565   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:45:29.511052   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:45:29.511091   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:45:29.525925   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:45:29.525954   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:45:29.591452   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:45:29.591482   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:45:29.591497   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:45:32.173633   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:45:32.186843   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:45:32.186925   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:45:32.221165   57198 cri.go:89] found id: ""
	I0812 11:45:32.221206   57198 logs.go:276] 0 containers: []
	W0812 11:45:32.221218   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:45:32.221226   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:45:32.221297   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:45:32.259613   57198 cri.go:89] found id: ""
	I0812 11:45:32.259647   57198 logs.go:276] 0 containers: []
	W0812 11:45:32.259660   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:45:32.259668   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:45:32.259733   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:45:32.293032   57198 cri.go:89] found id: ""
	I0812 11:45:32.293066   57198 logs.go:276] 0 containers: []
	W0812 11:45:32.293073   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:45:32.293079   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:45:32.293126   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:45:32.327907   57198 cri.go:89] found id: ""
	I0812 11:45:32.327935   57198 logs.go:276] 0 containers: []
	W0812 11:45:32.327946   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:45:32.327953   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:45:32.328015   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:45:32.372777   57198 cri.go:89] found id: ""
	I0812 11:45:32.372805   57198 logs.go:276] 0 containers: []
	W0812 11:45:32.372816   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:45:32.372823   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:45:32.372909   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:45:32.411139   57198 cri.go:89] found id: ""
	I0812 11:45:32.411163   57198 logs.go:276] 0 containers: []
	W0812 11:45:32.411171   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:45:32.411177   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:45:32.411231   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:45:32.445814   57198 cri.go:89] found id: ""
	I0812 11:45:32.445844   57198 logs.go:276] 0 containers: []
	W0812 11:45:32.445857   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:45:32.445864   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:45:32.445929   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:45:32.482062   57198 cri.go:89] found id: ""
	I0812 11:45:32.482092   57198 logs.go:276] 0 containers: []
	W0812 11:45:32.482100   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:45:32.482108   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:45:32.482128   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:45:32.531276   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:45:32.531312   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:45:32.545161   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:45:32.545190   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:45:32.617409   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:45:32.617429   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:45:32.617443   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:45:32.696646   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:45:32.696693   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:45:35.232363   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:45:35.245273   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:45:35.245335   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:45:35.297506   57198 cri.go:89] found id: ""
	I0812 11:45:35.297529   57198 logs.go:276] 0 containers: []
	W0812 11:45:35.297537   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:45:35.297542   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:45:35.297606   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:45:35.333125   57198 cri.go:89] found id: ""
	I0812 11:45:35.333158   57198 logs.go:276] 0 containers: []
	W0812 11:45:35.333170   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:45:35.333177   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:45:35.333228   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:45:35.365618   57198 cri.go:89] found id: ""
	I0812 11:45:35.365647   57198 logs.go:276] 0 containers: []
	W0812 11:45:35.365658   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:45:35.365665   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:45:35.365727   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:45:35.403550   57198 cri.go:89] found id: ""
	I0812 11:45:35.403584   57198 logs.go:276] 0 containers: []
	W0812 11:45:35.403596   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:45:35.403603   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:45:35.403673   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:45:35.440448   57198 cri.go:89] found id: ""
	I0812 11:45:35.440480   57198 logs.go:276] 0 containers: []
	W0812 11:45:35.440491   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:45:35.440497   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:45:35.440557   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:45:35.473579   57198 cri.go:89] found id: ""
	I0812 11:45:35.473604   57198 logs.go:276] 0 containers: []
	W0812 11:45:35.473615   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:45:35.473623   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:45:35.473692   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:45:35.511098   57198 cri.go:89] found id: ""
	I0812 11:45:35.511131   57198 logs.go:276] 0 containers: []
	W0812 11:45:35.511141   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:45:35.511148   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:45:35.511220   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:45:35.544764   57198 cri.go:89] found id: ""
	I0812 11:45:35.544795   57198 logs.go:276] 0 containers: []
	W0812 11:45:35.544806   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:45:35.544815   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:45:35.544828   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:45:35.593415   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:45:35.593452   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:45:35.606406   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:45:35.606439   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:45:35.671816   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:45:35.671842   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:45:35.671857   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:45:35.751536   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:45:35.751578   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:45:38.295443   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:45:38.309816   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:45:38.309875   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:45:38.347739   57198 cri.go:89] found id: ""
	I0812 11:45:38.347772   57198 logs.go:276] 0 containers: []
	W0812 11:45:38.347783   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:45:38.347791   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:45:38.347851   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:45:38.383682   57198 cri.go:89] found id: ""
	I0812 11:45:38.383706   57198 logs.go:276] 0 containers: []
	W0812 11:45:38.383714   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:45:38.383720   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:45:38.383770   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:45:38.418909   57198 cri.go:89] found id: ""
	I0812 11:45:38.418945   57198 logs.go:276] 0 containers: []
	W0812 11:45:38.418956   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:45:38.418963   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:45:38.419027   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:45:38.455028   57198 cri.go:89] found id: ""
	I0812 11:45:38.455066   57198 logs.go:276] 0 containers: []
	W0812 11:45:38.455076   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:45:38.455082   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:45:38.455131   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:45:38.487853   57198 cri.go:89] found id: ""
	I0812 11:45:38.487890   57198 logs.go:276] 0 containers: []
	W0812 11:45:38.487901   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:45:38.487908   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:45:38.487969   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:45:38.520194   57198 cri.go:89] found id: ""
	I0812 11:45:38.520229   57198 logs.go:276] 0 containers: []
	W0812 11:45:38.520241   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:45:38.520248   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:45:38.520307   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:45:38.558679   57198 cri.go:89] found id: ""
	I0812 11:45:38.558709   57198 logs.go:276] 0 containers: []
	W0812 11:45:38.558719   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:45:38.558726   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:45:38.558791   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:45:38.596478   57198 cri.go:89] found id: ""
	I0812 11:45:38.596512   57198 logs.go:276] 0 containers: []
	W0812 11:45:38.596525   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:45:38.596537   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:45:38.596557   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:45:38.682709   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:45:38.682746   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:45:38.724633   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:45:38.724663   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:45:38.774084   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:45:38.774121   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:45:38.787343   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:45:38.787373   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:45:38.861717   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:45:41.362704   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:45:41.377456   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:45:41.377524   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:45:41.415778   57198 cri.go:89] found id: ""
	I0812 11:45:41.415807   57198 logs.go:276] 0 containers: []
	W0812 11:45:41.415815   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:45:41.415821   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:45:41.415866   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:45:41.458849   57198 cri.go:89] found id: ""
	I0812 11:45:41.458874   57198 logs.go:276] 0 containers: []
	W0812 11:45:41.458882   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:45:41.458887   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:45:41.458941   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:45:41.500829   57198 cri.go:89] found id: ""
	I0812 11:45:41.500856   57198 logs.go:276] 0 containers: []
	W0812 11:45:41.500875   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:45:41.500883   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:45:41.500938   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:45:41.534367   57198 cri.go:89] found id: ""
	I0812 11:45:41.534402   57198 logs.go:276] 0 containers: []
	W0812 11:45:41.534411   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:45:41.534416   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:45:41.534469   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:45:41.572394   57198 cri.go:89] found id: ""
	I0812 11:45:41.572431   57198 logs.go:276] 0 containers: []
	W0812 11:45:41.572440   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:45:41.572446   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:45:41.572502   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:45:41.606303   57198 cri.go:89] found id: ""
	I0812 11:45:41.606336   57198 logs.go:276] 0 containers: []
	W0812 11:45:41.606347   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:45:41.606354   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:45:41.606417   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:45:41.639587   57198 cri.go:89] found id: ""
	I0812 11:45:41.639634   57198 logs.go:276] 0 containers: []
	W0812 11:45:41.639643   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:45:41.639652   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:45:41.639716   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:45:41.673382   57198 cri.go:89] found id: ""
	I0812 11:45:41.673425   57198 logs.go:276] 0 containers: []
	W0812 11:45:41.673434   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:45:41.673442   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:45:41.673453   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:45:41.726812   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:45:41.726850   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:45:41.740853   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:45:41.740902   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:45:41.822917   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:45:41.822946   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:45:41.822962   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:45:41.904344   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:45:41.904386   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:45:44.444430   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:45:44.457556   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:45:44.457638   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:45:44.490879   57198 cri.go:89] found id: ""
	I0812 11:45:44.490934   57198 logs.go:276] 0 containers: []
	W0812 11:45:44.490947   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:45:44.490955   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:45:44.491021   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:45:44.524185   57198 cri.go:89] found id: ""
	I0812 11:45:44.524218   57198 logs.go:276] 0 containers: []
	W0812 11:45:44.524229   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:45:44.524238   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:45:44.524305   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:45:44.562370   57198 cri.go:89] found id: ""
	I0812 11:45:44.562406   57198 logs.go:276] 0 containers: []
	W0812 11:45:44.562416   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:45:44.562438   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:45:44.562504   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:45:44.601652   57198 cri.go:89] found id: ""
	I0812 11:45:44.601683   57198 logs.go:276] 0 containers: []
	W0812 11:45:44.601692   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:45:44.601699   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:45:44.601756   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:45:44.646945   57198 cri.go:89] found id: ""
	I0812 11:45:44.646975   57198 logs.go:276] 0 containers: []
	W0812 11:45:44.646984   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:45:44.646990   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:45:44.647045   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:45:44.683840   57198 cri.go:89] found id: ""
	I0812 11:45:44.683866   57198 logs.go:276] 0 containers: []
	W0812 11:45:44.683876   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:45:44.683887   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:45:44.683947   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:45:44.720478   57198 cri.go:89] found id: ""
	I0812 11:45:44.720509   57198 logs.go:276] 0 containers: []
	W0812 11:45:44.720521   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:45:44.720529   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:45:44.720593   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:45:44.757944   57198 cri.go:89] found id: ""
	I0812 11:45:44.757973   57198 logs.go:276] 0 containers: []
	W0812 11:45:44.757981   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:45:44.758004   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:45:44.758017   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:45:44.812816   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:45:44.812859   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:45:44.826766   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:45:44.826794   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:45:44.902512   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:45:44.902530   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:45:44.902541   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:45:44.978854   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:45:44.978895   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:45:47.517544   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:45:47.531810   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:45:47.531887   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:45:47.568742   57198 cri.go:89] found id: ""
	I0812 11:45:47.568770   57198 logs.go:276] 0 containers: []
	W0812 11:45:47.568780   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:45:47.568788   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:45:47.568879   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:45:47.602503   57198 cri.go:89] found id: ""
	I0812 11:45:47.602534   57198 logs.go:276] 0 containers: []
	W0812 11:45:47.602545   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:45:47.602552   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:45:47.602613   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:45:47.637216   57198 cri.go:89] found id: ""
	I0812 11:45:47.637245   57198 logs.go:276] 0 containers: []
	W0812 11:45:47.637254   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:45:47.637261   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:45:47.637313   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:45:47.672011   57198 cri.go:89] found id: ""
	I0812 11:45:47.672041   57198 logs.go:276] 0 containers: []
	W0812 11:45:47.672052   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:45:47.672060   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:45:47.672143   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:45:47.706367   57198 cri.go:89] found id: ""
	I0812 11:45:47.706408   57198 logs.go:276] 0 containers: []
	W0812 11:45:47.706418   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:45:47.706425   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:45:47.706483   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:45:47.740781   57198 cri.go:89] found id: ""
	I0812 11:45:47.740812   57198 logs.go:276] 0 containers: []
	W0812 11:45:47.740823   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:45:47.740831   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:45:47.740910   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:45:47.773997   57198 cri.go:89] found id: ""
	I0812 11:45:47.774028   57198 logs.go:276] 0 containers: []
	W0812 11:45:47.774036   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:45:47.774042   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:45:47.774088   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:45:47.808499   57198 cri.go:89] found id: ""
	I0812 11:45:47.808530   57198 logs.go:276] 0 containers: []
	W0812 11:45:47.808590   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:45:47.808605   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:45:47.808618   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:45:47.847694   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:45:47.847720   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:45:47.899887   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:45:47.899922   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:45:47.913537   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:45:47.913574   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:45:47.987044   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:45:47.987063   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:45:47.987075   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:45:50.562084   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:45:50.577365   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:45:50.577445   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:45:50.611715   57198 cri.go:89] found id: ""
	I0812 11:45:50.611751   57198 logs.go:276] 0 containers: []
	W0812 11:45:50.611762   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:45:50.611770   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:45:50.611835   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:45:50.644614   57198 cri.go:89] found id: ""
	I0812 11:45:50.644647   57198 logs.go:276] 0 containers: []
	W0812 11:45:50.644659   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:45:50.644666   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:45:50.644724   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:45:50.681969   57198 cri.go:89] found id: ""
	I0812 11:45:50.681996   57198 logs.go:276] 0 containers: []
	W0812 11:45:50.682005   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:45:50.682013   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:45:50.682074   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:45:50.715774   57198 cri.go:89] found id: ""
	I0812 11:45:50.715805   57198 logs.go:276] 0 containers: []
	W0812 11:45:50.715816   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:45:50.715823   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:45:50.715886   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:45:50.752261   57198 cri.go:89] found id: ""
	I0812 11:45:50.752288   57198 logs.go:276] 0 containers: []
	W0812 11:45:50.752295   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:45:50.752300   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:45:50.752360   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:45:50.787417   57198 cri.go:89] found id: ""
	I0812 11:45:50.787445   57198 logs.go:276] 0 containers: []
	W0812 11:45:50.787456   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:45:50.787464   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:45:50.787532   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:45:50.824479   57198 cri.go:89] found id: ""
	I0812 11:45:50.824506   57198 logs.go:276] 0 containers: []
	W0812 11:45:50.824517   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:45:50.824525   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:45:50.824580   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:45:50.861039   57198 cri.go:89] found id: ""
	I0812 11:45:50.861067   57198 logs.go:276] 0 containers: []
	W0812 11:45:50.861075   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:45:50.861083   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:45:50.861094   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:45:50.936652   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:45:50.936685   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:45:50.936704   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:45:51.016994   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:45:51.017032   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:45:51.054019   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:45:51.054050   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:45:51.102652   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:45:51.102686   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:45:53.616804   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:45:53.630452   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:45:53.630522   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:45:53.665330   57198 cri.go:89] found id: ""
	I0812 11:45:53.665360   57198 logs.go:276] 0 containers: []
	W0812 11:45:53.665372   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:45:53.665379   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:45:53.665443   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:45:53.701797   57198 cri.go:89] found id: ""
	I0812 11:45:53.701824   57198 logs.go:276] 0 containers: []
	W0812 11:45:53.701834   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:45:53.701839   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:45:53.701908   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:45:53.736535   57198 cri.go:89] found id: ""
	I0812 11:45:53.736572   57198 logs.go:276] 0 containers: []
	W0812 11:45:53.736596   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:45:53.736612   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:45:53.736688   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:45:53.773741   57198 cri.go:89] found id: ""
	I0812 11:45:53.773816   57198 logs.go:276] 0 containers: []
	W0812 11:45:53.773826   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:45:53.773833   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:45:53.773885   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:45:53.810580   57198 cri.go:89] found id: ""
	I0812 11:45:53.810607   57198 logs.go:276] 0 containers: []
	W0812 11:45:53.810618   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:45:53.810624   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:45:53.810687   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:45:53.844825   57198 cri.go:89] found id: ""
	I0812 11:45:53.844852   57198 logs.go:276] 0 containers: []
	W0812 11:45:53.844884   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:45:53.844893   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:45:53.844956   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:45:53.878718   57198 cri.go:89] found id: ""
	I0812 11:45:53.878753   57198 logs.go:276] 0 containers: []
	W0812 11:45:53.878773   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:45:53.878781   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:45:53.878841   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:45:53.916015   57198 cri.go:89] found id: ""
	I0812 11:45:53.916043   57198 logs.go:276] 0 containers: []
	W0812 11:45:53.916053   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:45:53.916063   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:45:53.916082   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:45:53.954802   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:45:53.954832   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:45:54.003532   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:45:54.003568   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:45:54.017949   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:45:54.017976   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:45:54.084799   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:45:54.084824   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:45:54.084838   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:45:56.663328   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:45:56.676741   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:45:56.676826   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:45:56.714871   57198 cri.go:89] found id: ""
	I0812 11:45:56.714895   57198 logs.go:276] 0 containers: []
	W0812 11:45:56.714904   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:45:56.714911   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:45:56.714961   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:45:56.749716   57198 cri.go:89] found id: ""
	I0812 11:45:56.749745   57198 logs.go:276] 0 containers: []
	W0812 11:45:56.749756   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:45:56.749763   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:45:56.749833   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:45:56.782786   57198 cri.go:89] found id: ""
	I0812 11:45:56.782818   57198 logs.go:276] 0 containers: []
	W0812 11:45:56.782828   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:45:56.782835   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:45:56.782897   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:45:56.819428   57198 cri.go:89] found id: ""
	I0812 11:45:56.819459   57198 logs.go:276] 0 containers: []
	W0812 11:45:56.819469   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:45:56.819475   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:45:56.819525   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:45:56.852596   57198 cri.go:89] found id: ""
	I0812 11:45:56.852620   57198 logs.go:276] 0 containers: []
	W0812 11:45:56.852628   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:45:56.852634   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:45:56.852687   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:45:56.889040   57198 cri.go:89] found id: ""
	I0812 11:45:56.889066   57198 logs.go:276] 0 containers: []
	W0812 11:45:56.889073   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:45:56.889079   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:45:56.889127   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:45:56.927937   57198 cri.go:89] found id: ""
	I0812 11:45:56.927974   57198 logs.go:276] 0 containers: []
	W0812 11:45:56.927986   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:45:56.927993   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:45:56.928066   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:45:56.962745   57198 cri.go:89] found id: ""
	I0812 11:45:56.962776   57198 logs.go:276] 0 containers: []
	W0812 11:45:56.962787   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:45:56.962798   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:45:56.962811   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:45:57.014186   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:45:57.014216   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:45:57.027978   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:45:57.028008   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:45:57.094479   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:45:57.094497   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:45:57.094513   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:45:57.172002   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:45:57.172049   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:45:59.711092   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:45:59.724486   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:45:59.724569   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:45:59.764640   57198 cri.go:89] found id: ""
	I0812 11:45:59.764668   57198 logs.go:276] 0 containers: []
	W0812 11:45:59.764678   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:45:59.764685   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:45:59.764747   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:45:59.799991   57198 cri.go:89] found id: ""
	I0812 11:45:59.800020   57198 logs.go:276] 0 containers: []
	W0812 11:45:59.800030   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:45:59.800037   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:45:59.800098   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:45:59.837265   57198 cri.go:89] found id: ""
	I0812 11:45:59.837293   57198 logs.go:276] 0 containers: []
	W0812 11:45:59.837302   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:45:59.837308   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:45:59.837373   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:45:59.872132   57198 cri.go:89] found id: ""
	I0812 11:45:59.872159   57198 logs.go:276] 0 containers: []
	W0812 11:45:59.872167   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:45:59.872172   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:45:59.872222   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:45:59.907886   57198 cri.go:89] found id: ""
	I0812 11:45:59.907915   57198 logs.go:276] 0 containers: []
	W0812 11:45:59.907926   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:45:59.907934   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:45:59.907998   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:45:59.940997   57198 cri.go:89] found id: ""
	I0812 11:45:59.941035   57198 logs.go:276] 0 containers: []
	W0812 11:45:59.941047   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:45:59.941056   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:45:59.941128   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:45:59.975656   57198 cri.go:89] found id: ""
	I0812 11:45:59.975689   57198 logs.go:276] 0 containers: []
	W0812 11:45:59.975697   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:45:59.975702   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:45:59.975753   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:46:00.010949   57198 cri.go:89] found id: ""
	I0812 11:46:00.010986   57198 logs.go:276] 0 containers: []
	W0812 11:46:00.010997   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:46:00.011009   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:46:00.011023   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:46:00.078203   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:46:00.078228   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:46:00.078254   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:46:00.158093   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:46:00.158132   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:46:00.198440   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:46:00.198471   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:46:00.248023   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:46:00.248062   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:46:02.761965   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:46:02.775305   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:46:02.775380   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:46:02.813848   57198 cri.go:89] found id: ""
	I0812 11:46:02.813882   57198 logs.go:276] 0 containers: []
	W0812 11:46:02.813893   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:46:02.813900   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:46:02.813961   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:46:02.848691   57198 cri.go:89] found id: ""
	I0812 11:46:02.848723   57198 logs.go:276] 0 containers: []
	W0812 11:46:02.848732   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:46:02.848737   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:46:02.848794   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:46:02.882062   57198 cri.go:89] found id: ""
	I0812 11:46:02.882094   57198 logs.go:276] 0 containers: []
	W0812 11:46:02.882102   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:46:02.882108   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:46:02.882167   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:46:02.932571   57198 cri.go:89] found id: ""
	I0812 11:46:02.932605   57198 logs.go:276] 0 containers: []
	W0812 11:46:02.932616   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:46:02.932622   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:46:02.932683   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:46:02.967551   57198 cri.go:89] found id: ""
	I0812 11:46:02.967581   57198 logs.go:276] 0 containers: []
	W0812 11:46:02.967591   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:46:02.967599   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:46:02.967652   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:46:03.001151   57198 cri.go:89] found id: ""
	I0812 11:46:03.001178   57198 logs.go:276] 0 containers: []
	W0812 11:46:03.001188   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:46:03.001196   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:46:03.001257   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:46:03.037420   57198 cri.go:89] found id: ""
	I0812 11:46:03.037448   57198 logs.go:276] 0 containers: []
	W0812 11:46:03.037457   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:46:03.037462   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:46:03.037509   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:46:03.074874   57198 cri.go:89] found id: ""
	I0812 11:46:03.074900   57198 logs.go:276] 0 containers: []
	W0812 11:46:03.074907   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:46:03.074945   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:46:03.074960   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:46:03.123400   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:46:03.123441   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:46:03.138043   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:46:03.138074   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:46:03.203146   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:46:03.203168   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:46:03.203180   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:46:03.281862   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:46:03.281898   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:46:05.819513   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:46:05.834092   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:46:05.834180   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:46:05.869644   57198 cri.go:89] found id: ""
	I0812 11:46:05.869669   57198 logs.go:276] 0 containers: []
	W0812 11:46:05.869680   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:46:05.869687   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:46:05.869749   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:46:05.908050   57198 cri.go:89] found id: ""
	I0812 11:46:05.908079   57198 logs.go:276] 0 containers: []
	W0812 11:46:05.908090   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:46:05.908097   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:46:05.908155   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:46:05.941256   57198 cri.go:89] found id: ""
	I0812 11:46:05.941292   57198 logs.go:276] 0 containers: []
	W0812 11:46:05.941300   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:46:05.941306   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:46:05.941372   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:46:05.979229   57198 cri.go:89] found id: ""
	I0812 11:46:05.979260   57198 logs.go:276] 0 containers: []
	W0812 11:46:05.979270   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:46:05.979276   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:46:05.979349   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:46:06.011008   57198 cri.go:89] found id: ""
	I0812 11:46:06.011035   57198 logs.go:276] 0 containers: []
	W0812 11:46:06.011042   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:46:06.011047   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:46:06.011099   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:46:06.048210   57198 cri.go:89] found id: ""
	I0812 11:46:06.048234   57198 logs.go:276] 0 containers: []
	W0812 11:46:06.048246   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:46:06.048252   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:46:06.048298   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:46:06.080379   57198 cri.go:89] found id: ""
	I0812 11:46:06.080417   57198 logs.go:276] 0 containers: []
	W0812 11:46:06.080430   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:46:06.080439   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:46:06.080512   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:46:06.111822   57198 cri.go:89] found id: ""
	I0812 11:46:06.111847   57198 logs.go:276] 0 containers: []
	W0812 11:46:06.111856   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:46:06.111864   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:46:06.111874   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:46:06.161861   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:46:06.161900   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:46:06.176095   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:46:06.176125   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:46:06.249550   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:46:06.249573   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:46:06.249585   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:46:06.326621   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:46:06.326661   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:46:08.866483   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:46:08.880031   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:46:08.880102   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:46:08.917265   57198 cri.go:89] found id: ""
	I0812 11:46:08.917299   57198 logs.go:276] 0 containers: []
	W0812 11:46:08.917310   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:46:08.917317   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:46:08.917377   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:46:08.954401   57198 cri.go:89] found id: ""
	I0812 11:46:08.954431   57198 logs.go:276] 0 containers: []
	W0812 11:46:08.954440   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:46:08.954445   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:46:08.954505   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:46:08.987457   57198 cri.go:89] found id: ""
	I0812 11:46:08.987483   57198 logs.go:276] 0 containers: []
	W0812 11:46:08.987494   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:46:08.987500   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:46:08.987563   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:46:09.021800   57198 cri.go:89] found id: ""
	I0812 11:46:09.021830   57198 logs.go:276] 0 containers: []
	W0812 11:46:09.021837   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:46:09.021843   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:46:09.021906   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:46:09.055945   57198 cri.go:89] found id: ""
	I0812 11:46:09.055970   57198 logs.go:276] 0 containers: []
	W0812 11:46:09.055978   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:46:09.055983   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:46:09.056036   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:46:09.089049   57198 cri.go:89] found id: ""
	I0812 11:46:09.089077   57198 logs.go:276] 0 containers: []
	W0812 11:46:09.089087   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:46:09.089095   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:46:09.089158   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:46:09.121957   57198 cri.go:89] found id: ""
	I0812 11:46:09.121991   57198 logs.go:276] 0 containers: []
	W0812 11:46:09.122003   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:46:09.122009   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:46:09.122078   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:46:09.154048   57198 cri.go:89] found id: ""
	I0812 11:46:09.154077   57198 logs.go:276] 0 containers: []
	W0812 11:46:09.154087   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:46:09.154098   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:46:09.154114   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:46:09.210510   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:46:09.210554   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:46:09.224048   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:46:09.224078   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:46:09.293808   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:46:09.293832   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:46:09.293850   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:46:09.378189   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:46:09.378229   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:46:11.926083   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:46:11.939304   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:46:11.939371   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:46:11.974772   57198 cri.go:89] found id: ""
	I0812 11:46:11.974798   57198 logs.go:276] 0 containers: []
	W0812 11:46:11.974807   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:46:11.974812   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:46:11.974883   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:46:12.008566   57198 cri.go:89] found id: ""
	I0812 11:46:12.008598   57198 logs.go:276] 0 containers: []
	W0812 11:46:12.008609   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:46:12.008617   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:46:12.008680   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:46:12.043206   57198 cri.go:89] found id: ""
	I0812 11:46:12.043234   57198 logs.go:276] 0 containers: []
	W0812 11:46:12.043241   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:46:12.043246   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:46:12.043294   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:46:12.081119   57198 cri.go:89] found id: ""
	I0812 11:46:12.081146   57198 logs.go:276] 0 containers: []
	W0812 11:46:12.081153   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:46:12.081163   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:46:12.081231   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:46:12.115911   57198 cri.go:89] found id: ""
	I0812 11:46:12.115936   57198 logs.go:276] 0 containers: []
	W0812 11:46:12.115944   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:46:12.115949   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:46:12.116004   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:46:12.149820   57198 cri.go:89] found id: ""
	I0812 11:46:12.149850   57198 logs.go:276] 0 containers: []
	W0812 11:46:12.149859   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:46:12.149866   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:46:12.149930   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:46:12.182456   57198 cri.go:89] found id: ""
	I0812 11:46:12.182486   57198 logs.go:276] 0 containers: []
	W0812 11:46:12.182495   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:46:12.182501   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:46:12.182578   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:46:12.217380   57198 cri.go:89] found id: ""
	I0812 11:46:12.217409   57198 logs.go:276] 0 containers: []
	W0812 11:46:12.217418   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:46:12.217426   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:46:12.217440   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:46:12.267898   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:46:12.267931   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:46:12.280676   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:46:12.280705   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:46:12.347463   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:46:12.347487   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:46:12.347501   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:46:12.428270   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:46:12.428310   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
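After the per-component queries come back empty, each cycle ends by collecting host-level evidence: the kubelet and CRI-O journals, recent dmesg warnings, a `kubectl describe nodes` attempt (which fails here because nothing answers on localhost:8443), and a raw container listing. The sketch below gathers the same evidence in one pass, using only commands that appear verbatim in the log entries above; the output path is an arbitrary choice for this example.

	# Assumption: run inside the node; every command below is copied from the log entries above.
	{
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	       --kubeconfig=/var/lib/minikube/kubeconfig   # refused while the apiserver is down
	  sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	} > /tmp/control-plane-diagnostics.txt 2>&1        # hypothetical output file, not from the log

Keeping the outputs together makes it easier to compare the kubelet and CRI-O journals against the empty container listing when diagnosing why the static control-plane pods never started.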
	I0812 11:46:14.968372   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:46:14.981648   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:46:14.981713   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:46:15.018372   57198 cri.go:89] found id: ""
	I0812 11:46:15.018398   57198 logs.go:276] 0 containers: []
	W0812 11:46:15.018407   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:46:15.018412   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:46:15.018461   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:46:15.057503   57198 cri.go:89] found id: ""
	I0812 11:46:15.057534   57198 logs.go:276] 0 containers: []
	W0812 11:46:15.057546   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:46:15.057553   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:46:15.057612   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:46:15.106141   57198 cri.go:89] found id: ""
	I0812 11:46:15.106173   57198 logs.go:276] 0 containers: []
	W0812 11:46:15.106184   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:46:15.106191   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:46:15.106253   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:46:15.168076   57198 cri.go:89] found id: ""
	I0812 11:46:15.168100   57198 logs.go:276] 0 containers: []
	W0812 11:46:15.168110   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:46:15.168117   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:46:15.168183   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:46:15.202315   57198 cri.go:89] found id: ""
	I0812 11:46:15.202346   57198 logs.go:276] 0 containers: []
	W0812 11:46:15.202354   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:46:15.202360   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:46:15.202413   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:46:15.237227   57198 cri.go:89] found id: ""
	I0812 11:46:15.237252   57198 logs.go:276] 0 containers: []
	W0812 11:46:15.237259   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:46:15.237271   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:46:15.237355   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:46:15.273782   57198 cri.go:89] found id: ""
	I0812 11:46:15.273812   57198 logs.go:276] 0 containers: []
	W0812 11:46:15.273821   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:46:15.273829   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:46:15.273897   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:46:15.308155   57198 cri.go:89] found id: ""
	I0812 11:46:15.308186   57198 logs.go:276] 0 containers: []
	W0812 11:46:15.308195   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:46:15.308204   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:46:15.308216   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:46:15.359935   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:46:15.359976   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:46:15.373282   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:46:15.373349   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:46:15.444142   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:46:15.444240   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:46:15.444260   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:46:15.520543   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:46:15.520584   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:46:18.056668   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:46:18.070524   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:46:18.070582   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:46:18.104061   57198 cri.go:89] found id: ""
	I0812 11:46:18.104091   57198 logs.go:276] 0 containers: []
	W0812 11:46:18.104102   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:46:18.104110   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:46:18.104170   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:46:18.142671   57198 cri.go:89] found id: ""
	I0812 11:46:18.142701   57198 logs.go:276] 0 containers: []
	W0812 11:46:18.142712   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:46:18.142719   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:46:18.142783   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:46:18.177920   57198 cri.go:89] found id: ""
	I0812 11:46:18.177952   57198 logs.go:276] 0 containers: []
	W0812 11:46:18.177962   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:46:18.177968   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:46:18.178032   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:46:18.214620   57198 cri.go:89] found id: ""
	I0812 11:46:18.214655   57198 logs.go:276] 0 containers: []
	W0812 11:46:18.214667   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:46:18.214675   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:46:18.214738   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:46:18.247800   57198 cri.go:89] found id: ""
	I0812 11:46:18.247827   57198 logs.go:276] 0 containers: []
	W0812 11:46:18.247836   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:46:18.247844   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:46:18.247909   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:46:18.281625   57198 cri.go:89] found id: ""
	I0812 11:46:18.281653   57198 logs.go:276] 0 containers: []
	W0812 11:46:18.281661   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:46:18.281667   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:46:18.281734   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:46:18.317049   57198 cri.go:89] found id: ""
	I0812 11:46:18.317075   57198 logs.go:276] 0 containers: []
	W0812 11:46:18.317082   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:46:18.317088   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:46:18.317149   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:46:18.349815   57198 cri.go:89] found id: ""
	I0812 11:46:18.349842   57198 logs.go:276] 0 containers: []
	W0812 11:46:18.349852   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:46:18.349861   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:46:18.349877   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:46:18.363074   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:46:18.363108   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:46:18.425732   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:46:18.425756   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:46:18.425772   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:46:18.501444   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:46:18.501481   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:46:18.539721   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:46:18.539759   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:46:21.096696   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:46:21.109415   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:46:21.109482   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:46:21.146181   57198 cri.go:89] found id: ""
	I0812 11:46:21.146213   57198 logs.go:276] 0 containers: []
	W0812 11:46:21.146224   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:46:21.146231   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:46:21.146294   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:46:21.179794   57198 cri.go:89] found id: ""
	I0812 11:46:21.179821   57198 logs.go:276] 0 containers: []
	W0812 11:46:21.179832   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:46:21.179840   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:46:21.179899   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:46:21.212370   57198 cri.go:89] found id: ""
	I0812 11:46:21.212402   57198 logs.go:276] 0 containers: []
	W0812 11:46:21.212411   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:46:21.212417   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:46:21.212482   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:46:21.244999   57198 cri.go:89] found id: ""
	I0812 11:46:21.245029   57198 logs.go:276] 0 containers: []
	W0812 11:46:21.245040   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:46:21.245048   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:46:21.245111   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:46:21.279498   57198 cri.go:89] found id: ""
	I0812 11:46:21.279527   57198 logs.go:276] 0 containers: []
	W0812 11:46:21.279538   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:46:21.279545   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:46:21.279613   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:46:21.313916   57198 cri.go:89] found id: ""
	I0812 11:46:21.313941   57198 logs.go:276] 0 containers: []
	W0812 11:46:21.313950   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:46:21.313956   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:46:21.314003   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:46:21.347664   57198 cri.go:89] found id: ""
	I0812 11:46:21.347701   57198 logs.go:276] 0 containers: []
	W0812 11:46:21.347712   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:46:21.347719   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:46:21.347775   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:46:21.382503   57198 cri.go:89] found id: ""
	I0812 11:46:21.382530   57198 logs.go:276] 0 containers: []
	W0812 11:46:21.382540   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:46:21.382551   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:46:21.382564   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:46:21.434105   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:46:21.434139   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:46:21.448254   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:46:21.448283   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:46:21.523382   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:46:21.523407   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:46:21.523422   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:46:21.607094   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:46:21.607140   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:46:24.144536   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:46:24.157287   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:46:24.157358   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:46:24.202947   57198 cri.go:89] found id: ""
	I0812 11:46:24.202973   57198 logs.go:276] 0 containers: []
	W0812 11:46:24.202982   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:46:24.202988   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:46:24.203047   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:46:24.236471   57198 cri.go:89] found id: ""
	I0812 11:46:24.236501   57198 logs.go:276] 0 containers: []
	W0812 11:46:24.236511   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:46:24.236518   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:46:24.236579   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:46:24.270777   57198 cri.go:89] found id: ""
	I0812 11:46:24.270803   57198 logs.go:276] 0 containers: []
	W0812 11:46:24.270813   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:46:24.270821   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:46:24.270880   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:46:24.305503   57198 cri.go:89] found id: ""
	I0812 11:46:24.305529   57198 logs.go:276] 0 containers: []
	W0812 11:46:24.305536   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:46:24.305542   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:46:24.305589   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:46:24.339323   57198 cri.go:89] found id: ""
	I0812 11:46:24.339349   57198 logs.go:276] 0 containers: []
	W0812 11:46:24.339360   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:46:24.339367   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:46:24.339443   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:46:24.371265   57198 cri.go:89] found id: ""
	I0812 11:46:24.371298   57198 logs.go:276] 0 containers: []
	W0812 11:46:24.371309   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:46:24.371317   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:46:24.371396   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:46:24.404037   57198 cri.go:89] found id: ""
	I0812 11:46:24.404075   57198 logs.go:276] 0 containers: []
	W0812 11:46:24.404086   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:46:24.404093   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:46:24.404154   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:46:24.436832   57198 cri.go:89] found id: ""
	I0812 11:46:24.436855   57198 logs.go:276] 0 containers: []
	W0812 11:46:24.436888   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:46:24.436900   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:46:24.436915   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:46:24.489029   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:46:24.489070   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:46:24.503937   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:46:24.503968   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:46:24.580320   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:46:24.580350   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:46:24.580366   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:46:24.661160   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:46:24.661198   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:46:27.204353   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:46:27.217900   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:46:27.217964   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:46:27.254751   57198 cri.go:89] found id: ""
	I0812 11:46:27.254792   57198 logs.go:276] 0 containers: []
	W0812 11:46:27.254803   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:46:27.254812   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:46:27.254879   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:46:27.292172   57198 cri.go:89] found id: ""
	I0812 11:46:27.292204   57198 logs.go:276] 0 containers: []
	W0812 11:46:27.292213   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:46:27.292219   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:46:27.292286   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:46:27.331158   57198 cri.go:89] found id: ""
	I0812 11:46:27.331187   57198 logs.go:276] 0 containers: []
	W0812 11:46:27.331197   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:46:27.331205   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:46:27.331272   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:46:27.366515   57198 cri.go:89] found id: ""
	I0812 11:46:27.366539   57198 logs.go:276] 0 containers: []
	W0812 11:46:27.366546   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:46:27.366553   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:46:27.366601   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:46:27.400413   57198 cri.go:89] found id: ""
	I0812 11:46:27.400441   57198 logs.go:276] 0 containers: []
	W0812 11:46:27.400452   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:46:27.400460   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:46:27.400526   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:46:27.435504   57198 cri.go:89] found id: ""
	I0812 11:46:27.435533   57198 logs.go:276] 0 containers: []
	W0812 11:46:27.435542   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:46:27.435547   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:46:27.435605   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:46:27.474125   57198 cri.go:89] found id: ""
	I0812 11:46:27.474156   57198 logs.go:276] 0 containers: []
	W0812 11:46:27.474166   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:46:27.474173   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:46:27.474237   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:46:27.516805   57198 cri.go:89] found id: ""
	I0812 11:46:27.516841   57198 logs.go:276] 0 containers: []
	W0812 11:46:27.516852   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:46:27.516881   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:46:27.516897   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:46:27.567237   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:46:27.567274   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:46:27.580992   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:46:27.581030   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:46:27.649304   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:46:27.649334   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:46:27.649347   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:46:27.731567   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:46:27.731605   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:46:30.277418   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:46:30.290741   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:46:30.290823   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:46:30.326393   57198 cri.go:89] found id: ""
	I0812 11:46:30.326417   57198 logs.go:276] 0 containers: []
	W0812 11:46:30.326425   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:46:30.326433   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:46:30.326483   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:46:30.360205   57198 cri.go:89] found id: ""
	I0812 11:46:30.360234   57198 logs.go:276] 0 containers: []
	W0812 11:46:30.360244   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:46:30.360253   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:46:30.360313   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:46:30.394440   57198 cri.go:89] found id: ""
	I0812 11:46:30.394464   57198 logs.go:276] 0 containers: []
	W0812 11:46:30.394472   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:46:30.394478   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:46:30.394535   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:46:30.429166   57198 cri.go:89] found id: ""
	I0812 11:46:30.429265   57198 logs.go:276] 0 containers: []
	W0812 11:46:30.429275   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:46:30.429281   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:46:30.429339   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:46:30.463294   57198 cri.go:89] found id: ""
	I0812 11:46:30.463321   57198 logs.go:276] 0 containers: []
	W0812 11:46:30.463328   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:46:30.463334   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:46:30.463391   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:46:30.496911   57198 cri.go:89] found id: ""
	I0812 11:46:30.496937   57198 logs.go:276] 0 containers: []
	W0812 11:46:30.496948   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:46:30.496956   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:46:30.497023   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:46:30.530112   57198 cri.go:89] found id: ""
	I0812 11:46:30.530140   57198 logs.go:276] 0 containers: []
	W0812 11:46:30.530147   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:46:30.530153   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:46:30.530205   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:46:30.566342   57198 cri.go:89] found id: ""
	I0812 11:46:30.566377   57198 logs.go:276] 0 containers: []
	W0812 11:46:30.566389   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:46:30.566403   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:46:30.566422   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:46:30.620727   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:46:30.620767   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:46:30.634151   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:46:30.634178   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:46:30.701714   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:46:30.701745   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:46:30.701760   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:46:30.779268   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:46:30.779304   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:46:33.320799   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:46:33.334013   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:46:33.334085   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:46:33.365991   57198 cri.go:89] found id: ""
	I0812 11:46:33.366025   57198 logs.go:276] 0 containers: []
	W0812 11:46:33.366036   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:46:33.366043   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:46:33.366107   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:46:33.397813   57198 cri.go:89] found id: ""
	I0812 11:46:33.397847   57198 logs.go:276] 0 containers: []
	W0812 11:46:33.397859   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:46:33.397866   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:46:33.397925   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:46:33.431169   57198 cri.go:89] found id: ""
	I0812 11:46:33.431203   57198 logs.go:276] 0 containers: []
	W0812 11:46:33.431221   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:46:33.431229   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:46:33.431280   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:46:33.466278   57198 cri.go:89] found id: ""
	I0812 11:46:33.466325   57198 logs.go:276] 0 containers: []
	W0812 11:46:33.466333   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:46:33.466339   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:46:33.466398   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:46:33.499438   57198 cri.go:89] found id: ""
	I0812 11:46:33.499463   57198 logs.go:276] 0 containers: []
	W0812 11:46:33.499478   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:46:33.499483   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:46:33.499532   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:46:33.541384   57198 cri.go:89] found id: ""
	I0812 11:46:33.541418   57198 logs.go:276] 0 containers: []
	W0812 11:46:33.541428   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:46:33.541436   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:46:33.541495   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:46:33.574274   57198 cri.go:89] found id: ""
	I0812 11:46:33.574311   57198 logs.go:276] 0 containers: []
	W0812 11:46:33.574323   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:46:33.574332   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:46:33.574403   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:46:33.607194   57198 cri.go:89] found id: ""
	I0812 11:46:33.607222   57198 logs.go:276] 0 containers: []
	W0812 11:46:33.607231   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:46:33.607239   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:46:33.607250   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:46:33.657072   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:46:33.657110   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:46:33.670685   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:46:33.670714   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:46:33.739923   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:46:33.739947   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:46:33.739963   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:46:33.818080   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:46:33.818117   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:46:36.357288   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:46:36.370693   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:46:36.370753   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:46:36.405843   57198 cri.go:89] found id: ""
	I0812 11:46:36.405867   57198 logs.go:276] 0 containers: []
	W0812 11:46:36.405875   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:46:36.405882   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:46:36.405930   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:46:36.438394   57198 cri.go:89] found id: ""
	I0812 11:46:36.438418   57198 logs.go:276] 0 containers: []
	W0812 11:46:36.438426   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:46:36.438432   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:46:36.438479   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:46:36.473718   57198 cri.go:89] found id: ""
	I0812 11:46:36.473753   57198 logs.go:276] 0 containers: []
	W0812 11:46:36.473765   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:46:36.473772   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:46:36.473832   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:46:36.506017   57198 cri.go:89] found id: ""
	I0812 11:46:36.506048   57198 logs.go:276] 0 containers: []
	W0812 11:46:36.506060   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:46:36.506071   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:46:36.506131   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:46:36.540712   57198 cri.go:89] found id: ""
	I0812 11:46:36.540742   57198 logs.go:276] 0 containers: []
	W0812 11:46:36.540754   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:46:36.540760   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:46:36.540816   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:46:36.576053   57198 cri.go:89] found id: ""
	I0812 11:46:36.576082   57198 logs.go:276] 0 containers: []
	W0812 11:46:36.576090   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:46:36.576095   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:46:36.576144   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:46:36.609720   57198 cri.go:89] found id: ""
	I0812 11:46:36.609749   57198 logs.go:276] 0 containers: []
	W0812 11:46:36.609761   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:46:36.609769   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:46:36.609833   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:46:36.642140   57198 cri.go:89] found id: ""
	I0812 11:46:36.642176   57198 logs.go:276] 0 containers: []
	W0812 11:46:36.642187   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:46:36.642198   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:46:36.642211   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:46:36.693491   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:46:36.693522   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:46:36.706324   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:46:36.706354   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:46:36.778613   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:46:36.778634   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:46:36.778645   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:46:36.857033   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:46:36.857070   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:46:39.394822   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:46:39.408623   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:46:39.408688   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:46:39.444150   57198 cri.go:89] found id: ""
	I0812 11:46:39.444185   57198 logs.go:276] 0 containers: []
	W0812 11:46:39.444193   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:46:39.444199   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:46:39.444247   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:46:39.478706   57198 cri.go:89] found id: ""
	I0812 11:46:39.478744   57198 logs.go:276] 0 containers: []
	W0812 11:46:39.478754   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:46:39.478760   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:46:39.478830   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:46:39.512469   57198 cri.go:89] found id: ""
	I0812 11:46:39.512495   57198 logs.go:276] 0 containers: []
	W0812 11:46:39.512502   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:46:39.512507   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:46:39.512593   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:46:39.548349   57198 cri.go:89] found id: ""
	I0812 11:46:39.548385   57198 logs.go:276] 0 containers: []
	W0812 11:46:39.548397   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:46:39.548404   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:46:39.548481   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:46:39.585529   57198 cri.go:89] found id: ""
	I0812 11:46:39.585554   57198 logs.go:276] 0 containers: []
	W0812 11:46:39.585563   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:46:39.585569   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:46:39.585634   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:46:39.619197   57198 cri.go:89] found id: ""
	I0812 11:46:39.619221   57198 logs.go:276] 0 containers: []
	W0812 11:46:39.619229   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:46:39.619234   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:46:39.619289   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:46:39.653147   57198 cri.go:89] found id: ""
	I0812 11:46:39.653171   57198 logs.go:276] 0 containers: []
	W0812 11:46:39.653179   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:46:39.653184   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:46:39.653231   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:46:39.687479   57198 cri.go:89] found id: ""
	I0812 11:46:39.687512   57198 logs.go:276] 0 containers: []
	W0812 11:46:39.687523   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:46:39.687533   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:46:39.687545   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:46:39.724052   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:46:39.724086   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:46:39.774141   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:46:39.774179   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:46:39.788568   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:46:39.788600   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:46:39.860124   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:46:39.860146   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:46:39.860158   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:46:42.439100   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:46:42.452553   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:46:42.452638   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:46:42.487700   57198 cri.go:89] found id: ""
	I0812 11:46:42.487731   57198 logs.go:276] 0 containers: []
	W0812 11:46:42.487742   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:46:42.487750   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:46:42.487815   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:46:42.519506   57198 cri.go:89] found id: ""
	I0812 11:46:42.519535   57198 logs.go:276] 0 containers: []
	W0812 11:46:42.519546   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:46:42.519553   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:46:42.519620   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:46:42.551708   57198 cri.go:89] found id: ""
	I0812 11:46:42.551735   57198 logs.go:276] 0 containers: []
	W0812 11:46:42.551743   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:46:42.551749   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:46:42.551798   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:46:42.584427   57198 cri.go:89] found id: ""
	I0812 11:46:42.584461   57198 logs.go:276] 0 containers: []
	W0812 11:46:42.584473   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:46:42.584480   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:46:42.584544   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:46:42.617350   57198 cri.go:89] found id: ""
	I0812 11:46:42.617390   57198 logs.go:276] 0 containers: []
	W0812 11:46:42.617402   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:46:42.617410   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:46:42.617480   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:46:42.649518   57198 cri.go:89] found id: ""
	I0812 11:46:42.649546   57198 logs.go:276] 0 containers: []
	W0812 11:46:42.649555   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:46:42.649563   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:46:42.649636   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:46:42.682372   57198 cri.go:89] found id: ""
	I0812 11:46:42.682403   57198 logs.go:276] 0 containers: []
	W0812 11:46:42.682414   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:46:42.682421   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:46:42.682484   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:46:42.723558   57198 cri.go:89] found id: ""
	I0812 11:46:42.723586   57198 logs.go:276] 0 containers: []
	W0812 11:46:42.723595   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:46:42.723603   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:46:42.723617   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:46:42.736936   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:46:42.736968   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:46:42.805492   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:46:42.805513   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:46:42.805526   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:46:42.880812   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:46:42.880850   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:46:42.918033   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:46:42.918058   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:46:45.472477   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:46:45.489947   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:46:45.490010   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:46:45.522905   57198 cri.go:89] found id: ""
	I0812 11:46:45.522938   57198 logs.go:276] 0 containers: []
	W0812 11:46:45.522948   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:46:45.522954   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:46:45.523020   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:46:45.562505   57198 cri.go:89] found id: ""
	I0812 11:46:45.562534   57198 logs.go:276] 0 containers: []
	W0812 11:46:45.562543   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:46:45.562549   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:46:45.562612   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:46:45.597038   57198 cri.go:89] found id: ""
	I0812 11:46:45.597068   57198 logs.go:276] 0 containers: []
	W0812 11:46:45.597078   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:46:45.597084   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:46:45.597134   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:46:45.631721   57198 cri.go:89] found id: ""
	I0812 11:46:45.631752   57198 logs.go:276] 0 containers: []
	W0812 11:46:45.631762   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:46:45.631771   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:46:45.631838   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:46:45.665652   57198 cri.go:89] found id: ""
	I0812 11:46:45.665681   57198 logs.go:276] 0 containers: []
	W0812 11:46:45.665690   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:46:45.665696   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:46:45.665747   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:46:45.704986   57198 cri.go:89] found id: ""
	I0812 11:46:45.705010   57198 logs.go:276] 0 containers: []
	W0812 11:46:45.705018   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:46:45.705025   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:46:45.705083   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:46:45.742494   57198 cri.go:89] found id: ""
	I0812 11:46:45.742525   57198 logs.go:276] 0 containers: []
	W0812 11:46:45.742536   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:46:45.742545   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:46:45.742610   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:46:45.776197   57198 cri.go:89] found id: ""
	I0812 11:46:45.776230   57198 logs.go:276] 0 containers: []
	W0812 11:46:45.776242   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:46:45.776254   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:46:45.776270   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:46:45.855334   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:46:45.855352   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:46:45.855366   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:46:45.934050   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:46:45.934097   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:46:45.970783   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:46:45.970820   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:46:46.021304   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:46:46.021341   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:46:48.535219   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:46:48.549538   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:46:48.549599   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:46:48.584554   57198 cri.go:89] found id: ""
	I0812 11:46:48.584584   57198 logs.go:276] 0 containers: []
	W0812 11:46:48.584592   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:46:48.584598   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:46:48.584650   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:46:48.617639   57198 cri.go:89] found id: ""
	I0812 11:46:48.617668   57198 logs.go:276] 0 containers: []
	W0812 11:46:48.617675   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:46:48.617681   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:46:48.617731   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:46:48.649610   57198 cri.go:89] found id: ""
	I0812 11:46:48.649639   57198 logs.go:276] 0 containers: []
	W0812 11:46:48.649650   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:46:48.649656   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:46:48.649720   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:46:48.684138   57198 cri.go:89] found id: ""
	I0812 11:46:48.684162   57198 logs.go:276] 0 containers: []
	W0812 11:46:48.684174   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:46:48.684183   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:46:48.684242   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:46:48.718699   57198 cri.go:89] found id: ""
	I0812 11:46:48.718726   57198 logs.go:276] 0 containers: []
	W0812 11:46:48.718736   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:46:48.718743   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:46:48.718806   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:46:48.754782   57198 cri.go:89] found id: ""
	I0812 11:46:48.754805   57198 logs.go:276] 0 containers: []
	W0812 11:46:48.754814   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:46:48.754820   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:46:48.754880   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:46:48.787903   57198 cri.go:89] found id: ""
	I0812 11:46:48.787937   57198 logs.go:276] 0 containers: []
	W0812 11:46:48.787948   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:46:48.787958   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:46:48.788026   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:46:48.823621   57198 cri.go:89] found id: ""
	I0812 11:46:48.823650   57198 logs.go:276] 0 containers: []
	W0812 11:46:48.823662   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:46:48.823674   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:46:48.823689   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:46:48.873443   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:46:48.873475   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:46:48.895103   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:46:48.895136   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:46:48.964701   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:46:48.964733   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:46:48.964750   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:46:49.041618   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:46:49.041653   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:46:51.581845   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:46:51.594482   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:46:51.594565   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:46:51.633776   57198 cri.go:89] found id: ""
	I0812 11:46:51.633800   57198 logs.go:276] 0 containers: []
	W0812 11:46:51.633808   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:46:51.633814   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:46:51.633862   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:46:51.666264   57198 cri.go:89] found id: ""
	I0812 11:46:51.666302   57198 logs.go:276] 0 containers: []
	W0812 11:46:51.666316   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:46:51.666325   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:46:51.666389   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:46:51.701741   57198 cri.go:89] found id: ""
	I0812 11:46:51.701773   57198 logs.go:276] 0 containers: []
	W0812 11:46:51.701781   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:46:51.701787   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:46:51.701842   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:46:51.736194   57198 cri.go:89] found id: ""
	I0812 11:46:51.736225   57198 logs.go:276] 0 containers: []
	W0812 11:46:51.736234   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:46:51.736241   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:46:51.736302   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:46:51.773812   57198 cri.go:89] found id: ""
	I0812 11:46:51.773843   57198 logs.go:276] 0 containers: []
	W0812 11:46:51.773853   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:46:51.773859   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:46:51.773921   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:46:51.808762   57198 cri.go:89] found id: ""
	I0812 11:46:51.808791   57198 logs.go:276] 0 containers: []
	W0812 11:46:51.808799   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:46:51.808806   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:46:51.808853   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:46:51.845268   57198 cri.go:89] found id: ""
	I0812 11:46:51.845296   57198 logs.go:276] 0 containers: []
	W0812 11:46:51.845304   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:46:51.845309   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:46:51.845360   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:46:51.877556   57198 cri.go:89] found id: ""
	I0812 11:46:51.877580   57198 logs.go:276] 0 containers: []
	W0812 11:46:51.877588   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:46:51.877596   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:46:51.877606   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:46:51.927150   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:46:51.927193   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:46:51.940260   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:46:51.940291   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:46:52.012196   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:46:52.012226   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:46:52.012242   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:46:52.096602   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:46:52.096642   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:46:54.633587   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:46:54.653858   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:46:54.653945   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:46:54.693961   57198 cri.go:89] found id: ""
	I0812 11:46:54.693985   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.693992   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:46:54.693997   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:46:54.694045   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:46:54.728922   57198 cri.go:89] found id: ""
	I0812 11:46:54.728951   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.728963   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:46:54.728970   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:46:54.729034   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:46:54.764203   57198 cri.go:89] found id: ""
	I0812 11:46:54.764235   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.764246   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:46:54.764253   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:46:54.764316   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:46:54.805321   57198 cri.go:89] found id: ""
	I0812 11:46:54.805352   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.805363   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:46:54.805370   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:46:54.805437   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:46:54.844243   57198 cri.go:89] found id: ""
	I0812 11:46:54.844273   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.844281   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:46:54.844287   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:46:54.844345   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:46:54.883145   57198 cri.go:89] found id: ""
	I0812 11:46:54.883181   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.883192   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:46:54.883200   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:46:54.883263   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:46:54.921119   57198 cri.go:89] found id: ""
	I0812 11:46:54.921150   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.921160   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:46:54.921168   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:46:54.921230   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:46:54.955911   57198 cri.go:89] found id: ""
	I0812 11:46:54.955941   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.955949   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:46:54.955958   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:46:54.955969   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:46:55.006069   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:46:55.006108   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:46:55.020600   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:46:55.020637   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:46:55.094897   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:46:55.094917   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:46:55.094932   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:46:55.173601   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:46:55.173642   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:46:57.711917   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:46:57.726261   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:46:57.726340   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:46:57.762810   57198 cri.go:89] found id: ""
	I0812 11:46:57.762834   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.762845   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:46:57.762853   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:46:57.762919   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:46:57.796596   57198 cri.go:89] found id: ""
	I0812 11:46:57.796638   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.796649   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:46:57.796657   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:46:57.796719   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:46:57.829568   57198 cri.go:89] found id: ""
	I0812 11:46:57.829600   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.829607   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:46:57.829612   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:46:57.829659   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:46:57.861229   57198 cri.go:89] found id: ""
	I0812 11:46:57.861260   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.861271   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:46:57.861278   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:46:57.861339   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:46:57.892274   57198 cri.go:89] found id: ""
	I0812 11:46:57.892302   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.892312   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:46:57.892320   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:46:57.892384   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:46:57.924635   57198 cri.go:89] found id: ""
	I0812 11:46:57.924662   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.924670   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:46:57.924675   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:46:57.924723   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:46:57.961539   57198 cri.go:89] found id: ""
	I0812 11:46:57.961584   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.961592   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:46:57.961598   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:46:57.961656   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:46:57.994115   57198 cri.go:89] found id: ""
	I0812 11:46:57.994148   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.994160   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:46:57.994170   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:46:57.994182   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:46:58.067608   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:46:58.067648   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:46:58.105003   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:46:58.105036   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:46:58.156152   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:46:58.156186   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:46:58.169380   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:46:58.169409   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:46:58.236991   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
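	(Context for the repeated failure above: each "The connection to the server localhost:8443 was refused" line means nothing is listening yet on the apiserver's secure port inside the VM, so kubectl — and therefore the "describe nodes" log gathering — cannot connect. A minimal, hypothetical Go probe that checks the same condition, assuming it runs on the node itself; this is an illustrative sketch, not part of minikube:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Attempt the same TCP connection kubectl makes to the default secure port.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// Same condition behind the "connection ... refused" lines in the log above.
			fmt.Println("kube-apiserver not reachable on localhost:8443:", err)
			return
		}
		defer conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}
	)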
	I0812 11:47:00.737522   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:00.750916   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:00.750991   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:00.782713   57198 cri.go:89] found id: ""
	I0812 11:47:00.782734   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.782742   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:00.782747   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:00.782793   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:00.816552   57198 cri.go:89] found id: ""
	I0812 11:47:00.816576   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.816584   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:00.816590   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:00.816639   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:00.850761   57198 cri.go:89] found id: ""
	I0812 11:47:00.850784   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.850794   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:00.850801   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:00.850864   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:00.888099   57198 cri.go:89] found id: ""
	I0812 11:47:00.888138   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.888146   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:00.888152   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:00.888210   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:00.926073   57198 cri.go:89] found id: ""
	I0812 11:47:00.926103   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.926113   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:00.926120   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:00.926187   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:00.963404   57198 cri.go:89] found id: ""
	I0812 11:47:00.963434   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.963442   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:00.963447   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:00.963508   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:00.998331   57198 cri.go:89] found id: ""
	I0812 11:47:00.998366   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.998376   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:00.998385   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:00.998448   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:01.042696   57198 cri.go:89] found id: ""
	I0812 11:47:01.042729   57198 logs.go:276] 0 containers: []
	W0812 11:47:01.042738   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:01.042750   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:01.042764   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:01.134880   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:01.134918   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:01.171185   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:01.171223   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:01.222565   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:01.222608   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:01.236042   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:01.236076   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:01.309342   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:03.810121   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:03.822945   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:03.823023   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:03.856316   57198 cri.go:89] found id: ""
	I0812 11:47:03.856342   57198 logs.go:276] 0 containers: []
	W0812 11:47:03.856353   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:03.856361   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:03.856428   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:03.894579   57198 cri.go:89] found id: ""
	I0812 11:47:03.894610   57198 logs.go:276] 0 containers: []
	W0812 11:47:03.894622   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:03.894630   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:03.894680   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:03.929306   57198 cri.go:89] found id: ""
	I0812 11:47:03.929334   57198 logs.go:276] 0 containers: []
	W0812 11:47:03.929352   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:03.929359   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:03.929419   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:03.970739   57198 cri.go:89] found id: ""
	I0812 11:47:03.970774   57198 logs.go:276] 0 containers: []
	W0812 11:47:03.970786   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:03.970794   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:03.970872   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:04.004583   57198 cri.go:89] found id: ""
	I0812 11:47:04.004610   57198 logs.go:276] 0 containers: []
	W0812 11:47:04.004619   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:04.004630   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:04.004681   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:04.039259   57198 cri.go:89] found id: ""
	I0812 11:47:04.039288   57198 logs.go:276] 0 containers: []
	W0812 11:47:04.039298   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:04.039304   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:04.039372   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:04.072490   57198 cri.go:89] found id: ""
	I0812 11:47:04.072522   57198 logs.go:276] 0 containers: []
	W0812 11:47:04.072532   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:04.072547   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:04.072602   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:04.105648   57198 cri.go:89] found id: ""
	I0812 11:47:04.105677   57198 logs.go:276] 0 containers: []
	W0812 11:47:04.105686   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:04.105694   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:04.105705   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:04.181854   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:04.181880   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:04.181894   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:04.258499   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:04.258541   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:04.296893   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:04.296918   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:04.347475   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:04.347514   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:06.862382   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:06.876230   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:06.876314   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:06.919447   57198 cri.go:89] found id: ""
	I0812 11:47:06.919487   57198 logs.go:276] 0 containers: []
	W0812 11:47:06.919499   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:06.919508   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:06.919581   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:06.954000   57198 cri.go:89] found id: ""
	I0812 11:47:06.954035   57198 logs.go:276] 0 containers: []
	W0812 11:47:06.954046   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:06.954055   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:06.954124   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:06.988225   57198 cri.go:89] found id: ""
	I0812 11:47:06.988256   57198 logs.go:276] 0 containers: []
	W0812 11:47:06.988266   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:06.988274   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:06.988347   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:07.024425   57198 cri.go:89] found id: ""
	I0812 11:47:07.024452   57198 logs.go:276] 0 containers: []
	W0812 11:47:07.024464   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:07.024471   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:07.024536   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:07.059758   57198 cri.go:89] found id: ""
	I0812 11:47:07.059785   57198 logs.go:276] 0 containers: []
	W0812 11:47:07.059793   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:07.059800   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:07.059859   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:07.093540   57198 cri.go:89] found id: ""
	I0812 11:47:07.093570   57198 logs.go:276] 0 containers: []
	W0812 11:47:07.093580   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:07.093587   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:07.093649   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:07.126880   57198 cri.go:89] found id: ""
	I0812 11:47:07.126910   57198 logs.go:276] 0 containers: []
	W0812 11:47:07.126920   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:07.126929   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:07.126984   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:07.159930   57198 cri.go:89] found id: ""
	I0812 11:47:07.159959   57198 logs.go:276] 0 containers: []
	W0812 11:47:07.159970   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:07.159980   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:07.159995   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:07.214022   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:07.214063   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:07.227009   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:07.227037   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:07.297583   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:07.297609   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:07.297629   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:07.377229   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:07.377281   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:09.914683   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:09.927943   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:09.928014   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:09.961729   57198 cri.go:89] found id: ""
	I0812 11:47:09.961757   57198 logs.go:276] 0 containers: []
	W0812 11:47:09.961768   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:09.961775   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:09.961835   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:09.998895   57198 cri.go:89] found id: ""
	I0812 11:47:09.998923   57198 logs.go:276] 0 containers: []
	W0812 11:47:09.998931   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:09.998936   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:09.998989   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:10.036414   57198 cri.go:89] found id: ""
	I0812 11:47:10.036447   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.036457   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:10.036465   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:10.036519   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:10.073783   57198 cri.go:89] found id: ""
	I0812 11:47:10.073811   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.073818   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:10.073824   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:10.073872   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:10.110532   57198 cri.go:89] found id: ""
	I0812 11:47:10.110566   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.110577   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:10.110584   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:10.110643   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:10.143728   57198 cri.go:89] found id: ""
	I0812 11:47:10.143768   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.143782   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:10.143791   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:10.143875   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:10.176706   57198 cri.go:89] found id: ""
	I0812 11:47:10.176740   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.176749   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:10.176754   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:10.176803   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:10.210409   57198 cri.go:89] found id: ""
	I0812 11:47:10.210439   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.210449   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:10.210460   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:10.210474   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:10.261338   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:10.261378   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:10.274313   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:10.274346   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:10.341830   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:10.341865   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:10.341881   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:10.417654   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:10.417699   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:12.954982   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:12.967755   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:12.967841   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:13.001425   57198 cri.go:89] found id: ""
	I0812 11:47:13.001452   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.001462   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:13.001470   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:13.001528   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:13.036527   57198 cri.go:89] found id: ""
	I0812 11:47:13.036561   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.036572   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:13.036579   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:13.036640   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:13.073271   57198 cri.go:89] found id: ""
	I0812 11:47:13.073301   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.073314   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:13.073323   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:13.073380   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:13.107512   57198 cri.go:89] found id: ""
	I0812 11:47:13.107543   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.107551   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:13.107557   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:13.107614   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:13.141938   57198 cri.go:89] found id: ""
	I0812 11:47:13.141972   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.141984   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:13.141991   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:13.142051   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:13.176628   57198 cri.go:89] found id: ""
	I0812 11:47:13.176660   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.176672   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:13.176679   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:13.176739   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:13.211620   57198 cri.go:89] found id: ""
	I0812 11:47:13.211649   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.211660   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:13.211667   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:13.211732   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:13.243877   57198 cri.go:89] found id: ""
	I0812 11:47:13.243902   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.243909   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:13.243917   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:13.243928   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:13.297684   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:13.297718   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:13.311287   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:13.311318   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:13.376488   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:13.376516   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:13.376531   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:13.457745   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:13.457786   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
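	(The cycle that repeats throughout this log — pgrep for kube-apiserver, crictl listings per control-plane component, then kubelet/dmesg/describe-nodes/CRI-O fallbacks — can be approximated by the loop below. This is an illustrative sketch only, not minikube's logs.go; the helper name is hypothetical, it runs commands locally rather than over SSH into the VM, and the command strings simply mirror the "Run:" lines above:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// run executes a shell command roughly the way the log's ssh_runner does,
	// except locally instead of over SSH (an assumption of this sketch).
	func run(cmd string) (string, error) {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		return string(out), err
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for {
			// Stop once an apiserver process exists, as the pgrep probe in the log does.
			if _, err := run("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
				return
			}
			for _, name := range components {
				out, _ := run("sudo crictl ps -a --quiet --name=" + name)
				if strings.TrimSpace(out) == "" {
					fmt.Printf("No container was found matching %q\n", name)
				}
			}
			// With no control-plane containers, fall back to node-level logs.
			run("sudo journalctl -u kubelet -n 400")
			run("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
			run("sudo journalctl -u crio -n 400")
			time.Sleep(3 * time.Second)
		}
	}
	)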
	I0812 11:47:15.993556   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:16.006169   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:16.006249   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:16.040541   57198 cri.go:89] found id: ""
	I0812 11:47:16.040569   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.040578   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:16.040583   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:16.040633   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:16.073886   57198 cri.go:89] found id: ""
	I0812 11:47:16.073913   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.073924   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:16.073931   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:16.073993   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:16.107299   57198 cri.go:89] found id: ""
	I0812 11:47:16.107356   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.107369   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:16.107376   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:16.107431   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:16.142168   57198 cri.go:89] found id: ""
	I0812 11:47:16.142200   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.142209   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:16.142215   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:16.142262   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:16.175398   57198 cri.go:89] found id: ""
	I0812 11:47:16.175429   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.175440   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:16.175447   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:16.175509   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:16.210518   57198 cri.go:89] found id: ""
	I0812 11:47:16.210543   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.210551   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:16.210558   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:16.210614   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:16.244570   57198 cri.go:89] found id: ""
	I0812 11:47:16.244602   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.244611   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:16.244617   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:16.244683   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:16.278722   57198 cri.go:89] found id: ""
	I0812 11:47:16.278748   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.278756   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:16.278765   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:16.278777   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:16.322973   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:16.323010   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:16.374888   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:16.374936   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:16.388797   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:16.388827   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:16.462710   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:16.462731   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:16.462742   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:19.046529   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:19.061016   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:19.061083   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:19.098199   57198 cri.go:89] found id: ""
	I0812 11:47:19.098226   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.098238   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:19.098246   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:19.098307   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:19.131177   57198 cri.go:89] found id: ""
	I0812 11:47:19.131207   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.131215   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:19.131222   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:19.131281   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:19.164497   57198 cri.go:89] found id: ""
	I0812 11:47:19.164528   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.164539   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:19.164546   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:19.164619   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:19.200447   57198 cri.go:89] found id: ""
	I0812 11:47:19.200477   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.200485   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:19.200490   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:19.200553   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:19.235004   57198 cri.go:89] found id: ""
	I0812 11:47:19.235039   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.235051   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:19.235058   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:19.235114   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:19.269669   57198 cri.go:89] found id: ""
	I0812 11:47:19.269700   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.269711   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:19.269719   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:19.269786   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:19.305486   57198 cri.go:89] found id: ""
	I0812 11:47:19.305515   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.305527   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:19.305533   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:19.305610   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:19.340701   57198 cri.go:89] found id: ""
	I0812 11:47:19.340730   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.340737   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:19.340745   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:19.340757   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:19.391595   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:19.391637   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:19.405702   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:19.405730   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:19.476972   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:19.477002   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:19.477017   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:19.560001   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:19.560037   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:22.100167   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:22.112650   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:22.112712   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:22.145625   57198 cri.go:89] found id: ""
	I0812 11:47:22.145651   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.145659   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:22.145665   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:22.145722   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:22.181353   57198 cri.go:89] found id: ""
	I0812 11:47:22.181388   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.181400   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:22.181407   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:22.181465   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:22.213563   57198 cri.go:89] found id: ""
	I0812 11:47:22.213592   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.213603   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:22.213610   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:22.213669   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:22.247589   57198 cri.go:89] found id: ""
	I0812 11:47:22.247614   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.247629   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:22.247635   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:22.247682   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:22.279102   57198 cri.go:89] found id: ""
	I0812 11:47:22.279126   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.279134   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:22.279139   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:22.279187   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:22.316174   57198 cri.go:89] found id: ""
	I0812 11:47:22.316204   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.316215   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:22.316222   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:22.316289   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:22.351875   57198 cri.go:89] found id: ""
	I0812 11:47:22.351904   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.351915   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:22.351920   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:22.351976   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:22.384224   57198 cri.go:89] found id: ""
	I0812 11:47:22.384260   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.384273   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:22.384283   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:22.384297   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:22.423032   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:22.423058   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:22.474127   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:22.474165   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:22.487638   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:22.487672   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:22.556554   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:22.556590   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:22.556607   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:25.138357   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:25.152354   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:25.152438   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:25.187059   57198 cri.go:89] found id: ""
	I0812 11:47:25.187085   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.187095   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:25.187104   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:25.187164   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:25.220817   57198 cri.go:89] found id: ""
	I0812 11:47:25.220840   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.220848   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:25.220853   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:25.220911   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:25.256308   57198 cri.go:89] found id: ""
	I0812 11:47:25.256334   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.256342   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:25.256347   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:25.256394   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:25.290211   57198 cri.go:89] found id: ""
	I0812 11:47:25.290245   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.290254   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:25.290263   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:25.290334   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:25.324612   57198 cri.go:89] found id: ""
	I0812 11:47:25.324644   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.324651   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:25.324657   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:25.324708   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:25.362160   57198 cri.go:89] found id: ""
	I0812 11:47:25.362189   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.362200   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:25.362208   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:25.362271   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:25.396434   57198 cri.go:89] found id: ""
	I0812 11:47:25.396458   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.396466   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:25.396471   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:25.396531   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:25.429708   57198 cri.go:89] found id: ""
	I0812 11:47:25.429738   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.429750   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:25.429761   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:25.429775   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:25.443553   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:25.443588   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:25.515643   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:25.515684   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:25.515699   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:25.596323   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:25.596365   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:25.632444   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:25.632482   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:28.182092   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:28.195568   57198 kubeadm.go:597] duration metric: took 4m2.144668431s to restartPrimaryControlPlane
	W0812 11:47:28.195647   57198 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0812 11:47:28.195678   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0812 11:47:29.194896   57198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:47:29.210273   57198 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 11:47:29.220401   57198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 11:47:29.230765   57198 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 11:47:29.230783   57198 kubeadm.go:157] found existing configuration files:
	
	I0812 11:47:29.230825   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 11:47:29.240322   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 11:47:29.240392   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 11:47:29.251511   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 11:47:29.261616   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 11:47:29.261675   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 11:47:29.273431   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 11:47:29.284262   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 11:47:29.284331   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 11:47:29.295811   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 11:47:29.306613   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 11:47:29.306685   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 11:47:29.317986   57198 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 11:47:29.566668   57198 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 11:49:25.662550   57198 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0812 11:49:25.662668   57198 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0812 11:49:25.664487   57198 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0812 11:49:25.664563   57198 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 11:49:25.664640   57198 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 11:49:25.664729   57198 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 11:49:25.664809   57198 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0812 11:49:25.664949   57198 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 11:49:25.666793   57198 out.go:204]   - Generating certificates and keys ...
	I0812 11:49:25.666861   57198 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 11:49:25.666925   57198 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 11:49:25.667017   57198 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0812 11:49:25.667091   57198 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0812 11:49:25.667181   57198 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0812 11:49:25.667232   57198 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0812 11:49:25.667306   57198 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0812 11:49:25.667359   57198 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0812 11:49:25.667437   57198 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0812 11:49:25.667536   57198 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0812 11:49:25.667592   57198 kubeadm.go:310] [certs] Using the existing "sa" key
	I0812 11:49:25.667680   57198 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 11:49:25.667754   57198 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 11:49:25.667839   57198 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 11:49:25.667950   57198 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 11:49:25.668040   57198 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 11:49:25.668189   57198 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 11:49:25.668289   57198 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 11:49:25.668333   57198 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 11:49:25.668400   57198 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 11:49:25.670765   57198 out.go:204]   - Booting up control plane ...
	I0812 11:49:25.670861   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 11:49:25.670939   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 11:49:25.671039   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 11:49:25.671150   57198 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 11:49:25.671295   57198 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0812 11:49:25.671379   57198 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0812 11:49:25.671476   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:49:25.671647   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:49:25.671705   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:49:25.671862   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:49:25.671919   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:49:25.672079   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:49:25.672136   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:49:25.672288   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:49:25.672347   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:49:25.672558   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:49:25.672576   57198 kubeadm.go:310] 
	I0812 11:49:25.672636   57198 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0812 11:49:25.672686   57198 kubeadm.go:310] 		timed out waiting for the condition
	I0812 11:49:25.672701   57198 kubeadm.go:310] 
	I0812 11:49:25.672757   57198 kubeadm.go:310] 	This error is likely caused by:
	I0812 11:49:25.672811   57198 kubeadm.go:310] 		- The kubelet is not running
	I0812 11:49:25.672932   57198 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0812 11:49:25.672941   57198 kubeadm.go:310] 
	I0812 11:49:25.673048   57198 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0812 11:49:25.673091   57198 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0812 11:49:25.673133   57198 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0812 11:49:25.673141   57198 kubeadm.go:310] 
	I0812 11:49:25.673242   57198 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0812 11:49:25.673343   57198 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0812 11:49:25.673353   57198 kubeadm.go:310] 
	I0812 11:49:25.673513   57198 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0812 11:49:25.673593   57198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0812 11:49:25.673660   57198 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0812 11:49:25.673724   57198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0812 11:49:25.673768   57198 kubeadm.go:310] 
	W0812 11:49:25.673837   57198 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0812 11:49:25.673882   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0812 11:49:26.145437   57198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:49:26.160316   57198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 11:49:26.169638   57198 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 11:49:26.169664   57198 kubeadm.go:157] found existing configuration files:
	
	I0812 11:49:26.169711   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 11:49:26.179210   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 11:49:26.179278   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 11:49:26.189165   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 11:49:26.198952   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 11:49:26.199019   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 11:49:26.208905   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 11:49:26.217947   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 11:49:26.218003   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 11:49:26.227048   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 11:49:26.235890   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 11:49:26.235946   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 11:49:26.245085   57198 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 11:49:26.313657   57198 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0812 11:49:26.313809   57198 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 11:49:26.463967   57198 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 11:49:26.464098   57198 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 11:49:26.464204   57198 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0812 11:49:26.650503   57198 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 11:49:26.652540   57198 out.go:204]   - Generating certificates and keys ...
	I0812 11:49:26.652631   57198 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 11:49:26.652686   57198 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 11:49:26.652751   57198 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0812 11:49:26.652803   57198 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0812 11:49:26.652913   57198 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0812 11:49:26.652983   57198 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0812 11:49:26.653052   57198 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0812 11:49:26.653157   57198 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0812 11:49:26.653299   57198 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0812 11:49:26.653430   57198 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0812 11:49:26.653489   57198 kubeadm.go:310] [certs] Using the existing "sa" key
	I0812 11:49:26.653569   57198 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 11:49:26.881003   57198 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 11:49:26.962055   57198 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 11:49:27.166060   57198 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 11:49:27.340900   57198 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 11:49:27.359946   57198 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 11:49:27.362022   57198 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 11:49:27.362302   57198 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 11:49:27.515254   57198 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 11:49:27.517314   57198 out.go:204]   - Booting up control plane ...
	I0812 11:49:27.517444   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 11:49:27.523528   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 11:49:27.524732   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 11:49:27.525723   57198 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 11:49:27.527868   57198 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0812 11:50:07.530363   57198 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0812 11:50:07.530652   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:50:07.530821   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:50:12.531246   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:50:12.531502   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:50:22.532192   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:50:22.532372   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:50:42.533597   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:50:42.533815   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:51:22.535173   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:51:22.535490   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:51:22.535516   57198 kubeadm.go:310] 
	I0812 11:51:22.535573   57198 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0812 11:51:22.535625   57198 kubeadm.go:310] 		timed out waiting for the condition
	I0812 11:51:22.535646   57198 kubeadm.go:310] 
	I0812 11:51:22.535692   57198 kubeadm.go:310] 	This error is likely caused by:
	I0812 11:51:22.535728   57198 kubeadm.go:310] 		- The kubelet is not running
	I0812 11:51:22.535855   57198 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0812 11:51:22.535870   57198 kubeadm.go:310] 
	I0812 11:51:22.535954   57198 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0812 11:51:22.535985   57198 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0812 11:51:22.536028   57198 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0812 11:51:22.536038   57198 kubeadm.go:310] 
	I0812 11:51:22.536168   57198 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0812 11:51:22.536276   57198 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0812 11:51:22.536290   57198 kubeadm.go:310] 
	I0812 11:51:22.536440   57198 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0812 11:51:22.536532   57198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0812 11:51:22.536610   57198 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0812 11:51:22.536692   57198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0812 11:51:22.536701   57198 kubeadm.go:310] 
	I0812 11:51:22.537300   57198 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 11:51:22.537416   57198 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0812 11:51:22.537516   57198 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0812 11:51:22.537602   57198 kubeadm.go:394] duration metric: took 7m56.533771451s to StartCluster
	I0812 11:51:22.537650   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:51:22.537769   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:51:22.583654   57198 cri.go:89] found id: ""
	I0812 11:51:22.583679   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.583686   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:51:22.583692   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:51:22.583739   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:51:22.619477   57198 cri.go:89] found id: ""
	I0812 11:51:22.619510   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.619521   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:51:22.619528   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:51:22.619586   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:51:22.653038   57198 cri.go:89] found id: ""
	I0812 11:51:22.653068   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.653078   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:51:22.653085   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:51:22.653149   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:51:22.686106   57198 cri.go:89] found id: ""
	I0812 11:51:22.686134   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.686142   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:51:22.686148   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:51:22.686196   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:51:22.723533   57198 cri.go:89] found id: ""
	I0812 11:51:22.723560   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.723567   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:51:22.723572   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:51:22.723629   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:51:22.767355   57198 cri.go:89] found id: ""
	I0812 11:51:22.767382   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.767390   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:51:22.767395   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:51:22.767472   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:51:22.807472   57198 cri.go:89] found id: ""
	I0812 11:51:22.807509   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.807522   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:51:22.807530   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:51:22.807604   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:51:22.842565   57198 cri.go:89] found id: ""
	I0812 11:51:22.842594   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.842603   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:51:22.842615   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:51:22.842629   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:51:22.894638   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:51:22.894677   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:51:22.907871   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:51:22.907902   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:51:22.989089   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:51:22.989114   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:51:22.989126   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:51:23.114659   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:51:23.114713   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0812 11:51:23.168124   57198 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
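The troubleshooting advice in the kubeadm output above can be followed from the host; a short sketch, assuming the commands are run through "minikube ssh" against this profile (not verified against this run):

    minikube ssh -p old-k8s-version-835962 "sudo systemctl status kubelet"
    minikube ssh -p old-k8s-version-835962 "sudo journalctl -xeu kubelet | tail -n 100"
    minikube ssh -p old-k8s-version-835962 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"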
	W0812 11:51:23.168182   57198 out.go:239] * 
	W0812 11:51:23.168252   57198 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0812 11:51:23.168284   57198 out.go:239] * 
	W0812 11:51:23.169113   57198 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
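The box above points at minikube's log bundling; a minimal sketch of collecting the full log for this profile, assuming the -p flag is used to select it:

    out/minikube-linux-amd64 -p old-k8s-version-835962 logs --file=logs.txt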
	I0812 11:51:23.173151   57198 out.go:177] 
	W0812 11:51:23.174712   57198 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0812 11:51:23.174762   57198 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0812 11:51:23.174782   57198 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0812 11:51:23.176508   57198 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-835962 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
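The suggestion captured in the log above points at the kubelet cgroup driver; one plausible retry reuses the failing arguments and adds the suggested override (a sketch, not verified against this run):

    out/minikube-linux-amd64 start -p old-k8s-version-835962 --memory=2200 --alsologtostderr --wait=true \
      --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
      --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd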
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-835962 -n old-k8s-version-835962
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-835962 -n old-k8s-version-835962: exit status 2 (235.530092ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-835962 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p pause-693259                                        | pause-693259                 | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-693259                                        | pause-693259                 | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-693259                                        | pause-693259                 | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	| start   | -p embed-certs-093615                                  | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:35 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-002803                              | cert-expiration-002803       | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	| delete  | -p                                                     | disable-driver-mounts-101845 | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	|         | disable-driver-mounts-101845                           |                              |         |         |                     |                     |
	| start   | -p no-preload-993542                                   | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:36 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-093615            | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 11:35 UTC | 12 Aug 24 11:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-093615                                  | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 11:35 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-993542             | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:36 UTC | 12 Aug 24 11:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-993542                                   | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-835962        | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 11:37 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-093615                 | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 11:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-093615                                  | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 11:38 UTC | 12 Aug 24 11:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-835962                              | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 11:38 UTC | 12 Aug 24 11:39 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-835962             | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC | 12 Aug 24 11:39 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-835962                              | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-535697                           | kubernetes-upgrade-535697    | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC | 12 Aug 24 11:39 UTC |
	| start   | -p                                                     | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC | 12 Aug 24 11:44 UTC |
	|         | default-k8s-diff-port-581883                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-993542                  | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-993542                                   | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC | 12 Aug 24 11:49 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-581883  | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:44 UTC | 12 Aug 24 11:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:44 UTC |                     |
	|         | default-k8s-diff-port-581883                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-581883       | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:46 UTC |                     |
	|         | default-k8s-diff-port-581883                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 11:46:59
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 11:46:59.013199   59908 out.go:291] Setting OutFile to fd 1 ...
	I0812 11:46:59.013476   59908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:46:59.013486   59908 out.go:304] Setting ErrFile to fd 2...
	I0812 11:46:59.013490   59908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:46:59.013689   59908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 11:46:59.014204   59908 out.go:298] Setting JSON to false
	I0812 11:46:59.015302   59908 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5360,"bootTime":1723457859,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 11:46:59.015368   59908 start.go:139] virtualization: kvm guest
	I0812 11:46:59.017512   59908 out.go:177] * [default-k8s-diff-port-581883] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 11:46:59.018833   59908 notify.go:220] Checking for updates...
	I0812 11:46:59.018859   59908 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 11:46:59.020251   59908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 11:46:59.021646   59908 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 11:46:59.022806   59908 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 11:46:59.024110   59908 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 11:46:59.025476   59908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 11:46:59.027290   59908 config.go:182] Loaded profile config "default-k8s-diff-port-581883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 11:46:59.027911   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:46:59.028000   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:46:59.042960   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39549
	I0812 11:46:59.043506   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:46:59.044010   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:46:59.044038   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:46:59.044357   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:46:59.044528   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:46:59.044791   59908 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 11:46:59.045201   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:46:59.045244   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:46:59.060824   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35189
	I0812 11:46:59.061268   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:46:59.061747   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:46:59.061775   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:46:59.062156   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:46:59.062346   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:46:59.101403   59908 out.go:177] * Using the kvm2 driver based on existing profile
	I0812 11:46:59.102677   59908 start.go:297] selected driver: kvm2
	I0812 11:46:59.102698   59908 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-581883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-581883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:46:59.102863   59908 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 11:46:59.103621   59908 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 11:46:59.103690   59908 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19409-3774/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 11:46:59.119409   59908 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 11:46:59.119785   59908 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 11:46:59.119848   59908 cni.go:84] Creating CNI manager for ""
	I0812 11:46:59.119862   59908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:46:59.119900   59908 start.go:340] cluster config:
	{Name:default-k8s-diff-port-581883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-581883 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:46:59.120006   59908 iso.go:125] acquiring lock: {Name:mk817f4bb6a5031e68978029203802957205757f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 11:46:59.121814   59908 out.go:177] * Starting "default-k8s-diff-port-581883" primary control-plane node in "default-k8s-diff-port-581883" cluster
	I0812 11:46:59.123067   59908 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 11:46:59.123111   59908 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0812 11:46:59.123124   59908 cache.go:56] Caching tarball of preloaded images
	I0812 11:46:59.123213   59908 preload.go:172] Found /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 11:46:59.123228   59908 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 11:46:59.123315   59908 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/config.json ...
	I0812 11:46:59.123508   59908 start.go:360] acquireMachinesLock for default-k8s-diff-port-581883: {Name:mkf1f3ff7da562a087a1e344d13e67c3c8140973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 11:46:59.123549   59908 start.go:364] duration metric: took 23.58µs to acquireMachinesLock for "default-k8s-diff-port-581883"
	I0812 11:46:59.123562   59908 start.go:96] Skipping create...Using existing machine configuration
	I0812 11:46:59.123569   59908 fix.go:54] fixHost starting: 
	I0812 11:46:59.123822   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:46:59.123852   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:46:59.138741   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44075
	I0812 11:46:59.139136   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:46:59.139611   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:46:59.139638   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:46:59.139938   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:46:59.140109   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:46:59.140220   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetState
	I0812 11:46:59.141738   59908 fix.go:112] recreateIfNeeded on default-k8s-diff-port-581883: state=Running err=<nil>
	W0812 11:46:59.141754   59908 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 11:46:59.143728   59908 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-581883" VM ...
	I0812 11:46:54.633587   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:46:54.653858   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:46:54.653945   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:46:54.693961   57198 cri.go:89] found id: ""
	I0812 11:46:54.693985   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.693992   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:46:54.693997   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:46:54.694045   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:46:54.728922   57198 cri.go:89] found id: ""
	I0812 11:46:54.728951   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.728963   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:46:54.728970   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:46:54.729034   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:46:54.764203   57198 cri.go:89] found id: ""
	I0812 11:46:54.764235   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.764246   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:46:54.764253   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:46:54.764316   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:46:54.805321   57198 cri.go:89] found id: ""
	I0812 11:46:54.805352   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.805363   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:46:54.805370   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:46:54.805437   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:46:54.844243   57198 cri.go:89] found id: ""
	I0812 11:46:54.844273   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.844281   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:46:54.844287   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:46:54.844345   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:46:54.883145   57198 cri.go:89] found id: ""
	I0812 11:46:54.883181   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.883192   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:46:54.883200   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:46:54.883263   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:46:54.921119   57198 cri.go:89] found id: ""
	I0812 11:46:54.921150   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.921160   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:46:54.921168   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:46:54.921230   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:46:54.955911   57198 cri.go:89] found id: ""
	I0812 11:46:54.955941   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.955949   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:46:54.955958   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:46:54.955969   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:46:55.006069   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:46:55.006108   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:46:55.020600   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:46:55.020637   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:46:55.094897   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:46:55.094917   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:46:55.094932   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:46:55.173601   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:46:55.173642   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
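The block above is one full pass of minikube's log collector: for each control-plane component it runs "sudo crictl ps -a --quiet --name=<component>" over SSH and, because the output is empty, records "No container was found matching ...", then falls back to kubelet, dmesg, describe-nodes, CRI-O and container-status output. A minimal Go sketch of that container sweep, run locally rather than over SSH; the command and component list are taken from the log, everything else (helper name, output handling) is an assumption, not minikube's actual cri.go code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs mirrors the probe in the log: "crictl ps -a --quiet --name=<component>"
    // prints one container ID per line, or nothing at all when no container matches.
    func listContainerIDs(component string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
        if err != nil {
            return nil, err
        }
        trimmed := strings.TrimSpace(string(out))
        if trimmed == "" {
            return nil, nil // the "0 containers" / "No container was found matching" case
        }
        return strings.Split(trimmed, "\n"), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
        for _, c := range components {
            ids, err := listContainerIDs(c)
            if err != nil {
                fmt.Printf("probe for %s failed: %v\n", c, err)
                continue
            }
            fmt.Printf("%s: %d containers\n", c, len(ids))
        }
    }

In this test every probe returns zero IDs, which is why each cycle then falls back to journalctl, dmesg and crictl/docker status output only.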
	I0812 11:46:57.711917   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:46:57.726261   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:46:57.726340   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:46:57.762810   57198 cri.go:89] found id: ""
	I0812 11:46:57.762834   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.762845   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:46:57.762853   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:46:57.762919   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:46:57.796596   57198 cri.go:89] found id: ""
	I0812 11:46:57.796638   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.796649   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:46:57.796657   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:46:57.796719   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:46:57.829568   57198 cri.go:89] found id: ""
	I0812 11:46:57.829600   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.829607   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:46:57.829612   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:46:57.829659   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:46:57.861229   57198 cri.go:89] found id: ""
	I0812 11:46:57.861260   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.861271   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:46:57.861278   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:46:57.861339   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:46:57.892274   57198 cri.go:89] found id: ""
	I0812 11:46:57.892302   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.892312   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:46:57.892320   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:46:57.892384   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:46:57.924635   57198 cri.go:89] found id: ""
	I0812 11:46:57.924662   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.924670   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:46:57.924675   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:46:57.924723   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:46:57.961539   57198 cri.go:89] found id: ""
	I0812 11:46:57.961584   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.961592   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:46:57.961598   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:46:57.961656   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:46:57.994115   57198 cri.go:89] found id: ""
	I0812 11:46:57.994148   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.994160   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:46:57.994170   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:46:57.994182   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:46:58.067608   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:46:58.067648   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:46:58.105003   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:46:58.105036   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:46:58.156152   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:46:58.156186   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:46:58.169380   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:46:58.169409   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:46:58.236991   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
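The "describe nodes" step fails identically in every one of these cycles: the in-VM kubectl (v1.20.0 here) points at localhost:8443, and with no kube-apiserver container running the TCP connection is refused. A throwaway Go probe that reproduces the same symptom, sketched under the assumption that it runs inside the guest; it is not part of the test suite:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // The kubeconfig used by the bundled kubectl targets localhost:8443.
        // With the apiserver container missing, the dial is refused, which is
        // exactly the stderr repeated after each "describe nodes" attempt above.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is open")
    }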
	I0812 11:46:56.296673   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:46:58.297248   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:00.796584   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:00.182029   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:02.182240   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
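The interleaved pod_ready lines belong to two other profiles that are polling their metrics-server pods and reporting the pod's Ready condition, which stays False for the whole window shown here. A condensed client-go sketch of that kind of readiness poll; the pod name is copied from the log, while the 2-second interval and the use of the default kubeconfig are assumptions rather than minikube's pod_ready helper:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        for {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
                "metrics-server-6867b74b74-s52v2", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("metrics-server is Ready")
                return
            }
            fmt.Println(`pod not Ready yet: "Ready":"False"`)
            time.Sleep(2 * time.Second)
        }
    }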
	I0812 11:46:59.144895   59908 machine.go:94] provisionDockerMachine start ...
	I0812 11:46:59.144926   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:46:59.145161   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:46:59.147827   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:46:59.148278   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:43:32 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:46:59.148305   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:46:59.148451   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:46:59.148645   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:46:59.148820   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:46:59.148953   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:46:59.149111   59908 main.go:141] libmachine: Using SSH client type: native
	I0812 11:46:59.149331   59908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0812 11:46:59.149345   59908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0812 11:47:02.045251   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:00.737522   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:00.750916   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:00.750991   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:00.782713   57198 cri.go:89] found id: ""
	I0812 11:47:00.782734   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.782742   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:00.782747   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:00.782793   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:00.816552   57198 cri.go:89] found id: ""
	I0812 11:47:00.816576   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.816584   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:00.816590   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:00.816639   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:00.850761   57198 cri.go:89] found id: ""
	I0812 11:47:00.850784   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.850794   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:00.850801   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:00.850864   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:00.888099   57198 cri.go:89] found id: ""
	I0812 11:47:00.888138   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.888146   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:00.888152   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:00.888210   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:00.926073   57198 cri.go:89] found id: ""
	I0812 11:47:00.926103   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.926113   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:00.926120   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:00.926187   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:00.963404   57198 cri.go:89] found id: ""
	I0812 11:47:00.963434   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.963442   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:00.963447   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:00.963508   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:00.998331   57198 cri.go:89] found id: ""
	I0812 11:47:00.998366   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.998376   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:00.998385   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:00.998448   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:01.042696   57198 cri.go:89] found id: ""
	I0812 11:47:01.042729   57198 logs.go:276] 0 containers: []
	W0812 11:47:01.042738   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:01.042750   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:01.042764   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:01.134880   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:01.134918   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:01.171185   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:01.171223   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:01.222565   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:01.222608   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:01.236042   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:01.236076   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:01.309342   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:03.810121   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:03.822945   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:03.823023   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:03.856316   57198 cri.go:89] found id: ""
	I0812 11:47:03.856342   57198 logs.go:276] 0 containers: []
	W0812 11:47:03.856353   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:03.856361   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:03.856428   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:03.894579   57198 cri.go:89] found id: ""
	I0812 11:47:03.894610   57198 logs.go:276] 0 containers: []
	W0812 11:47:03.894622   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:03.894630   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:03.894680   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:03.929306   57198 cri.go:89] found id: ""
	I0812 11:47:03.929334   57198 logs.go:276] 0 containers: []
	W0812 11:47:03.929352   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:03.929359   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:03.929419   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:03.970739   57198 cri.go:89] found id: ""
	I0812 11:47:03.970774   57198 logs.go:276] 0 containers: []
	W0812 11:47:03.970786   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:03.970794   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:03.970872   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:04.004583   57198 cri.go:89] found id: ""
	I0812 11:47:04.004610   57198 logs.go:276] 0 containers: []
	W0812 11:47:04.004619   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:04.004630   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:04.004681   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:04.039259   57198 cri.go:89] found id: ""
	I0812 11:47:04.039288   57198 logs.go:276] 0 containers: []
	W0812 11:47:04.039298   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:04.039304   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:04.039372   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:04.072490   57198 cri.go:89] found id: ""
	I0812 11:47:04.072522   57198 logs.go:276] 0 containers: []
	W0812 11:47:04.072532   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:04.072547   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:04.072602   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:04.105648   57198 cri.go:89] found id: ""
	I0812 11:47:04.105677   57198 logs.go:276] 0 containers: []
	W0812 11:47:04.105686   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:04.105694   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:04.105705   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:04.181854   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:04.181880   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:04.181894   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:04.258499   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:04.258541   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:03.294934   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:05.295154   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:04.183393   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:06.682752   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:05.121108   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:04.296893   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:04.296918   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:04.347475   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:04.347514   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:06.862382   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:06.876230   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:06.876314   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:06.919447   57198 cri.go:89] found id: ""
	I0812 11:47:06.919487   57198 logs.go:276] 0 containers: []
	W0812 11:47:06.919499   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:06.919508   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:06.919581   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:06.954000   57198 cri.go:89] found id: ""
	I0812 11:47:06.954035   57198 logs.go:276] 0 containers: []
	W0812 11:47:06.954046   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:06.954055   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:06.954124   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:06.988225   57198 cri.go:89] found id: ""
	I0812 11:47:06.988256   57198 logs.go:276] 0 containers: []
	W0812 11:47:06.988266   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:06.988274   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:06.988347   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:07.024425   57198 cri.go:89] found id: ""
	I0812 11:47:07.024452   57198 logs.go:276] 0 containers: []
	W0812 11:47:07.024464   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:07.024471   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:07.024536   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:07.059758   57198 cri.go:89] found id: ""
	I0812 11:47:07.059785   57198 logs.go:276] 0 containers: []
	W0812 11:47:07.059793   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:07.059800   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:07.059859   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:07.093540   57198 cri.go:89] found id: ""
	I0812 11:47:07.093570   57198 logs.go:276] 0 containers: []
	W0812 11:47:07.093580   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:07.093587   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:07.093649   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:07.126880   57198 cri.go:89] found id: ""
	I0812 11:47:07.126910   57198 logs.go:276] 0 containers: []
	W0812 11:47:07.126920   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:07.126929   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:07.126984   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:07.159930   57198 cri.go:89] found id: ""
	I0812 11:47:07.159959   57198 logs.go:276] 0 containers: []
	W0812 11:47:07.159970   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:07.159980   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:07.159995   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:07.214022   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:07.214063   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:07.227009   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:07.227037   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:07.297583   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:07.297609   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:07.297629   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:07.377229   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:07.377281   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:07.296302   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:09.296695   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:09.182760   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:11.682727   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:11.197110   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:09.914683   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:09.927943   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:09.928014   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:09.961729   57198 cri.go:89] found id: ""
	I0812 11:47:09.961757   57198 logs.go:276] 0 containers: []
	W0812 11:47:09.961768   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:09.961775   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:09.961835   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:09.998895   57198 cri.go:89] found id: ""
	I0812 11:47:09.998923   57198 logs.go:276] 0 containers: []
	W0812 11:47:09.998931   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:09.998936   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:09.998989   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:10.036414   57198 cri.go:89] found id: ""
	I0812 11:47:10.036447   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.036457   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:10.036465   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:10.036519   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:10.073783   57198 cri.go:89] found id: ""
	I0812 11:47:10.073811   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.073818   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:10.073824   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:10.073872   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:10.110532   57198 cri.go:89] found id: ""
	I0812 11:47:10.110566   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.110577   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:10.110584   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:10.110643   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:10.143728   57198 cri.go:89] found id: ""
	I0812 11:47:10.143768   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.143782   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:10.143791   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:10.143875   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:10.176706   57198 cri.go:89] found id: ""
	I0812 11:47:10.176740   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.176749   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:10.176754   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:10.176803   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:10.210409   57198 cri.go:89] found id: ""
	I0812 11:47:10.210439   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.210449   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:10.210460   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:10.210474   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:10.261338   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:10.261378   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:10.274313   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:10.274346   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:10.341830   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:10.341865   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:10.341881   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:10.417654   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:10.417699   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:12.954982   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:12.967755   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:12.967841   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:13.001425   57198 cri.go:89] found id: ""
	I0812 11:47:13.001452   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.001462   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:13.001470   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:13.001528   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:13.036527   57198 cri.go:89] found id: ""
	I0812 11:47:13.036561   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.036572   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:13.036579   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:13.036640   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:13.073271   57198 cri.go:89] found id: ""
	I0812 11:47:13.073301   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.073314   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:13.073323   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:13.073380   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:13.107512   57198 cri.go:89] found id: ""
	I0812 11:47:13.107543   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.107551   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:13.107557   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:13.107614   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:13.141938   57198 cri.go:89] found id: ""
	I0812 11:47:13.141972   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.141984   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:13.141991   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:13.142051   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:13.176628   57198 cri.go:89] found id: ""
	I0812 11:47:13.176660   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.176672   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:13.176679   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:13.176739   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:13.211620   57198 cri.go:89] found id: ""
	I0812 11:47:13.211649   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.211660   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:13.211667   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:13.211732   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:13.243877   57198 cri.go:89] found id: ""
	I0812 11:47:13.243902   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.243909   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:13.243917   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:13.243928   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:13.297684   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:13.297718   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:13.311287   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:13.311318   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:13.376488   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:13.376516   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:13.376531   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:13.457745   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:13.457786   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:11.795381   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:13.795932   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:14.183038   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:16.183071   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:14.273141   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:15.993556   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:16.006169   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:16.006249   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:16.040541   57198 cri.go:89] found id: ""
	I0812 11:47:16.040569   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.040578   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:16.040583   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:16.040633   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:16.073886   57198 cri.go:89] found id: ""
	I0812 11:47:16.073913   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.073924   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:16.073931   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:16.073993   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:16.107299   57198 cri.go:89] found id: ""
	I0812 11:47:16.107356   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.107369   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:16.107376   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:16.107431   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:16.142168   57198 cri.go:89] found id: ""
	I0812 11:47:16.142200   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.142209   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:16.142215   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:16.142262   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:16.175398   57198 cri.go:89] found id: ""
	I0812 11:47:16.175429   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.175440   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:16.175447   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:16.175509   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:16.210518   57198 cri.go:89] found id: ""
	I0812 11:47:16.210543   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.210551   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:16.210558   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:16.210614   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:16.244570   57198 cri.go:89] found id: ""
	I0812 11:47:16.244602   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.244611   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:16.244617   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:16.244683   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:16.278722   57198 cri.go:89] found id: ""
	I0812 11:47:16.278748   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.278756   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:16.278765   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:16.278777   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:16.322973   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:16.323010   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:16.374888   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:16.374936   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:16.388797   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:16.388827   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:16.462710   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:16.462731   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:16.462742   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:19.046529   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:19.061016   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:19.061083   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:19.098199   57198 cri.go:89] found id: ""
	I0812 11:47:19.098226   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.098238   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:19.098246   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:19.098307   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:19.131177   57198 cri.go:89] found id: ""
	I0812 11:47:19.131207   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.131215   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:19.131222   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:19.131281   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:19.164497   57198 cri.go:89] found id: ""
	I0812 11:47:19.164528   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.164539   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:19.164546   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:19.164619   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:19.200447   57198 cri.go:89] found id: ""
	I0812 11:47:19.200477   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.200485   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:19.200490   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:19.200553   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:19.235004   57198 cri.go:89] found id: ""
	I0812 11:47:19.235039   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.235051   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:19.235058   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:19.235114   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:16.297007   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:18.796402   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:18.186341   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:20.682850   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:22.683087   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:20.349117   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:23.421182   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
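The repeated "Error dialing TCP ... no route to host" lines come from libmachine's provisioning step waiting for the restarted default-k8s-diff-port VM to answer on its SSH port again. A small retry loop in the same spirit; the address is the one from the log, while the timeout, interval and attempt count are assumptions, not libmachine's actual settings:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        addr := "192.168.50.114:22" // the VM's SSH endpoint from the log above
        for attempt := 1; attempt <= 20; attempt++ {
            conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
            if err != nil {
                // While the guest network is down this prints errors such as
                // "dial tcp 192.168.50.114:22: connect: no route to host".
                fmt.Printf("attempt %d: %v\n", attempt, err)
                time.Sleep(3 * time.Second)
                continue
            }
            conn.Close()
            fmt.Println("SSH port reachable; provisioning can continue")
            return
        }
        fmt.Println("gave up waiting for SSH")
    }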
	I0812 11:47:19.269669   57198 cri.go:89] found id: ""
	I0812 11:47:19.269700   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.269711   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:19.269719   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:19.269786   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:19.305486   57198 cri.go:89] found id: ""
	I0812 11:47:19.305515   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.305527   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:19.305533   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:19.305610   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:19.340701   57198 cri.go:89] found id: ""
	I0812 11:47:19.340730   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.340737   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:19.340745   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:19.340757   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:19.391595   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:19.391637   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:19.405702   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:19.405730   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:19.476972   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:19.477002   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:19.477017   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:19.560001   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:19.560037   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:22.100167   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:22.112650   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:22.112712   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:22.145625   57198 cri.go:89] found id: ""
	I0812 11:47:22.145651   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.145659   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:22.145665   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:22.145722   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:22.181353   57198 cri.go:89] found id: ""
	I0812 11:47:22.181388   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.181400   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:22.181407   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:22.181465   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:22.213563   57198 cri.go:89] found id: ""
	I0812 11:47:22.213592   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.213603   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:22.213610   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:22.213669   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:22.247589   57198 cri.go:89] found id: ""
	I0812 11:47:22.247614   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.247629   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:22.247635   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:22.247682   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:22.279102   57198 cri.go:89] found id: ""
	I0812 11:47:22.279126   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.279134   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:22.279139   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:22.279187   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:22.316174   57198 cri.go:89] found id: ""
	I0812 11:47:22.316204   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.316215   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:22.316222   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:22.316289   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:22.351875   57198 cri.go:89] found id: ""
	I0812 11:47:22.351904   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.351915   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:22.351920   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:22.351976   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:22.384224   57198 cri.go:89] found id: ""
	I0812 11:47:22.384260   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.384273   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:22.384283   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:22.384297   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:22.423032   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:22.423058   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:22.474127   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:22.474165   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:22.487638   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:22.487672   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:22.556554   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:22.556590   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:22.556607   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
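(Note on the preceding step: the "listing CRI containers" lines amount to running crictl once per control-plane component and treating an empty result as "no container found", before falling back to kubelet/dmesg/CRI-O log collection. A minimal Go sketch of that loop, for illustration only and not minikube's actual code; component names are taken from the log above.)

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
    	for _, name := range components {
    		// crictl prints one container ID per line when --quiet is set.
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		ids := strings.Fields(string(out))
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("no container was found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
    	}
    }
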
	I0812 11:47:21.295000   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:23.295712   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:25.296884   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:25.183687   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:27.683615   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:25.138357   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:25.152354   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:25.152438   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:25.187059   57198 cri.go:89] found id: ""
	I0812 11:47:25.187085   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.187095   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:25.187104   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:25.187164   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:25.220817   57198 cri.go:89] found id: ""
	I0812 11:47:25.220840   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.220848   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:25.220853   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:25.220911   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:25.256308   57198 cri.go:89] found id: ""
	I0812 11:47:25.256334   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.256342   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:25.256347   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:25.256394   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:25.290211   57198 cri.go:89] found id: ""
	I0812 11:47:25.290245   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.290254   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:25.290263   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:25.290334   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:25.324612   57198 cri.go:89] found id: ""
	I0812 11:47:25.324644   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.324651   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:25.324657   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:25.324708   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:25.362160   57198 cri.go:89] found id: ""
	I0812 11:47:25.362189   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.362200   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:25.362208   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:25.362271   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:25.396434   57198 cri.go:89] found id: ""
	I0812 11:47:25.396458   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.396466   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:25.396471   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:25.396531   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:25.429708   57198 cri.go:89] found id: ""
	I0812 11:47:25.429738   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.429750   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:25.429761   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:25.429775   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:25.443553   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:25.443588   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:25.515643   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:25.515684   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:25.515699   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:25.596323   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:25.596365   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:25.632444   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:25.632482   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:28.182092   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:28.195568   57198 kubeadm.go:597] duration metric: took 4m2.144668431s to restartPrimaryControlPlane
	W0812 11:47:28.195647   57198 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0812 11:47:28.195678   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0812 11:47:29.194896   57198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:47:29.210273   57198 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 11:47:29.220401   57198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 11:47:29.230765   57198 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 11:47:29.230783   57198 kubeadm.go:157] found existing configuration files:
	
	I0812 11:47:29.230825   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 11:47:29.240322   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 11:47:29.240392   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 11:47:29.251511   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 11:47:29.261616   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 11:47:29.261675   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 11:47:27.795828   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:29.796889   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:29.683959   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:32.183115   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:32.541112   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
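(Note on the repeated "Error dialing TCP ... no route to host" lines from the libmachine driver: they indicate the driver retrying a TCP connection to the node's SSH port, 192.168.50.114:22 in this log, while the VM is unreachable. A small illustrative Go sketch of such a dial-and-retry check; the address and port come from the log, the retry count and timeout are assumptions.)

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	addr := "192.168.50.114:22" // node SSH endpoint from the log
    	for i := 0; i < 5; i++ {
    		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
    		if err != nil {
    			fmt.Println("Error dialing TCP:", err)
    			time.Sleep(3 * time.Second)
    			continue
    		}
    		conn.Close()
    		fmt.Println("SSH port reachable")
    		return
    	}
    }
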
	I0812 11:47:29.273431   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 11:47:29.284262   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 11:47:29.284331   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 11:47:29.295811   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 11:47:29.306613   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 11:47:29.306685   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
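(Note on the stale-config cleanup just logged: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443; otherwise it is removed so the subsequent "kubeadm init" can regenerate it. A hypothetical Go sketch of that decision, with the paths and endpoint taken from the log; this is not minikube's actual implementation.)

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err == nil && strings.Contains(string(data), endpoint) {
    			continue // config already targets the expected endpoint; keep it
    		}
    		fmt.Printf("removing stale or missing config %s\n", f)
    		_ = os.Remove(f) // ignore "no such file", mirroring rm -f
    	}
    }
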
	I0812 11:47:29.317986   57198 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 11:47:29.566668   57198 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 11:47:32.295992   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:34.795262   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:34.183370   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:36.682661   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:35.613159   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:36.796467   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:39.295851   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:39.182790   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:41.183829   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:41.693116   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:41.795257   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:43.795510   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:45.795595   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:43.681967   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:45.684043   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:44.765178   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:48.296050   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:50.796799   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:48.181748   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:50.182360   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:52.682975   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:50.845098   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:53.917138   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:53.299038   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:55.796462   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:55.183044   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:57.685262   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:58.295509   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:00.795668   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:00.182427   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:02.682842   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:59.997094   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:03.069083   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:03.296463   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:05.795306   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:05.182884   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:07.682408   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:07.796147   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:10.296184   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:10.182124   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:12.182757   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:09.149157   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:12.221135   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:12.296827   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:14.796551   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:14.682524   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:16.682657   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:18.301111   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:17.295545   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:19.295850   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:18.688121   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:21.182277   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:21.373181   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:21.297142   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:23.798497   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:23.182636   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:25.682702   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:27.682936   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:27.453111   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:26.295505   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:28.296105   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:30.796925   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:29.688759   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:32.182416   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:30.525184   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:33.295379   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:35.296605   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:34.183273   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:36.682829   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:36.605187   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:37.796023   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:38.789570   57616 pod_ready.go:81] duration metric: took 4m0.000355544s for pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace to be "Ready" ...
	E0812 11:48:38.789615   57616 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0812 11:48:38.789648   57616 pod_ready.go:38] duration metric: took 4m11.040926567s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
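(Note on the pod_ready timeout above: the Ready condition of the pod is polled periodically until it reports True or the overall 4m0s budget is exhausted, at which point the wait is abandoned with the "will not retry" error. An illustrative Go sketch of such a readiness poll, using kubectl for brevity; the namespace and pod name are from the log, the polling interval is an assumption.)

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func main() {
    	const ns, pod = "kube-system", "metrics-server-6867b74b74-s52v2" // name from the log
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("kubectl", "-n", ns, "get", "pod", pod,
    			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    		if err == nil && strings.TrimSpace(string(out)) == "True" {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting 4m0s for pod to be Ready")
    }
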
	I0812 11:48:38.789687   57616 kubeadm.go:597] duration metric: took 4m21.131138259s to restartPrimaryControlPlane
	W0812 11:48:38.789757   57616 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0812 11:48:38.789794   57616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0812 11:48:38.683163   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:40.683334   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:39.677106   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:43.182845   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:44.677001   56845 pod_ready.go:81] duration metric: took 4m0.0007218s for pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace to be "Ready" ...
	E0812 11:48:44.677024   56845 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace to be "Ready" (will not retry!)
	I0812 11:48:44.677041   56845 pod_ready.go:38] duration metric: took 4m12.037310023s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:48:44.677065   56845 kubeadm.go:597] duration metric: took 4m19.591323336s to restartPrimaryControlPlane
	W0812 11:48:44.677114   56845 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0812 11:48:44.677137   56845 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0812 11:48:45.757157   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:48.829146   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:54.909142   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:57.981079   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:04.870417   57616 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.080589185s)
	I0812 11:49:04.870490   57616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:49:04.897963   57616 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 11:49:04.912211   57616 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 11:49:04.933833   57616 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 11:49:04.933861   57616 kubeadm.go:157] found existing configuration files:
	
	I0812 11:49:04.933915   57616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 11:49:04.946673   57616 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 11:49:04.946756   57616 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 11:49:04.960851   57616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 11:49:04.989181   57616 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 11:49:04.989259   57616 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 11:49:05.002989   57616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 11:49:05.012600   57616 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 11:49:05.012673   57616 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 11:49:05.022301   57616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 11:49:05.031680   57616 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 11:49:05.031761   57616 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 11:49:05.041453   57616 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 11:49:05.087039   57616 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-rc.0
	I0812 11:49:05.087106   57616 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 11:49:05.195646   57616 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 11:49:05.195788   57616 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 11:49:05.195909   57616 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0812 11:49:05.204565   57616 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 11:49:05.207373   57616 out.go:204]   - Generating certificates and keys ...
	I0812 11:49:05.207481   57616 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 11:49:05.207573   57616 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 11:49:05.207696   57616 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0812 11:49:05.207792   57616 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0812 11:49:05.207896   57616 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0812 11:49:05.207995   57616 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0812 11:49:05.208103   57616 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0812 11:49:05.208195   57616 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0812 11:49:05.208296   57616 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0812 11:49:05.208401   57616 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0812 11:49:05.208456   57616 kubeadm.go:310] [certs] Using the existing "sa" key
	I0812 11:49:05.208531   57616 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 11:49:05.368644   57616 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 11:49:05.523403   57616 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0812 11:49:05.656177   57616 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 11:49:05.786141   57616 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 11:49:05.945607   57616 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 11:49:05.946201   57616 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 11:49:05.948940   57616 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 11:49:05.950857   57616 out.go:204]   - Booting up control plane ...
	I0812 11:49:05.950970   57616 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 11:49:05.951060   57616 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 11:49:05.952093   57616 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 11:49:05.971023   57616 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 11:49:05.978207   57616 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 11:49:05.978421   57616 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 11:49:06.109216   57616 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0812 11:49:06.109362   57616 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0812 11:49:04.061117   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:07.133143   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:07.110595   57616 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001459707s
	I0812 11:49:07.110732   57616 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0812 11:49:12.112776   57616 kubeadm.go:310] [api-check] The API server is healthy after 5.002008667s
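(Note on the [kubelet-check] and [api-check] phases above: both poll a health endpoint, e.g. http://127.0.0.1:10248/healthz for the kubelet, until it returns 200 or a 4m0s deadline expires. A minimal Go sketch of that kind of polling loop; the endpoint and timeout come from the log, the code itself is illustrative rather than kubeadm's.)

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitHealthy(url string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := http.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // endpoint reported healthy
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
    	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
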
	I0812 11:49:12.126637   57616 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0812 11:49:12.141115   57616 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0812 11:49:12.166337   57616 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0812 11:49:12.166727   57616 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-993542 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0812 11:49:12.180548   57616 kubeadm.go:310] [bootstrap-token] Using token: jiwh9x.y6rsv6xjvwdwkbct
	I0812 11:49:12.182174   57616 out.go:204]   - Configuring RBAC rules ...
	I0812 11:49:12.182276   57616 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0812 11:49:12.191053   57616 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0812 11:49:12.203294   57616 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0812 11:49:12.208858   57616 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0812 11:49:12.215501   57616 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0812 11:49:12.227747   57616 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0812 11:49:12.520136   57616 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0812 11:49:12.964503   57616 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0812 11:49:13.523969   57616 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0812 11:49:13.524831   57616 kubeadm.go:310] 
	I0812 11:49:13.524954   57616 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0812 11:49:13.524973   57616 kubeadm.go:310] 
	I0812 11:49:13.525098   57616 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0812 11:49:13.525113   57616 kubeadm.go:310] 
	I0812 11:49:13.525147   57616 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0812 11:49:13.525220   57616 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0812 11:49:13.525311   57616 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0812 11:49:13.525325   57616 kubeadm.go:310] 
	I0812 11:49:13.525411   57616 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0812 11:49:13.525420   57616 kubeadm.go:310] 
	I0812 11:49:13.525489   57616 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0812 11:49:13.525503   57616 kubeadm.go:310] 
	I0812 11:49:13.525572   57616 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0812 11:49:13.525690   57616 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0812 11:49:13.525780   57616 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0812 11:49:13.525790   57616 kubeadm.go:310] 
	I0812 11:49:13.525905   57616 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0812 11:49:13.526000   57616 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0812 11:49:13.526011   57616 kubeadm.go:310] 
	I0812 11:49:13.526119   57616 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jiwh9x.y6rsv6xjvwdwkbct \
	I0812 11:49:13.526271   57616 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc \
	I0812 11:49:13.526307   57616 kubeadm.go:310] 	--control-plane 
	I0812 11:49:13.526317   57616 kubeadm.go:310] 
	I0812 11:49:13.526420   57616 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0812 11:49:13.526429   57616 kubeadm.go:310] 
	I0812 11:49:13.526527   57616 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jiwh9x.y6rsv6xjvwdwkbct \
	I0812 11:49:13.526653   57616 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc 
	I0812 11:49:13.527630   57616 kubeadm.go:310] W0812 11:49:05.056260    3066 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0812 11:49:13.528000   57616 kubeadm.go:310] W0812 11:49:05.058135    3066 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0812 11:49:13.528149   57616 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 11:49:13.528175   57616 cni.go:84] Creating CNI manager for ""
	I0812 11:49:13.528189   57616 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:49:13.529938   57616 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0812 11:49:13.213137   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:13.531443   57616 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0812 11:49:13.542933   57616 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0812 11:49:13.562053   57616 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 11:49:13.562181   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:13.562196   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-993542 minikube.k8s.io/updated_at=2024_08_12T11_49_13_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7 minikube.k8s.io/name=no-preload-993542 minikube.k8s.io/primary=true
	I0812 11:49:13.764006   57616 ops.go:34] apiserver oom_adj: -16
	I0812 11:49:13.764145   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:14.264728   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:14.764225   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:15.264599   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:15.764919   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:15.943701   56845 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.266539018s)
	I0812 11:49:15.943778   56845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:49:15.959746   56845 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 11:49:15.970630   56845 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 11:49:15.980712   56845 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 11:49:15.980729   56845 kubeadm.go:157] found existing configuration files:
	
	I0812 11:49:15.980775   56845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 11:49:15.990070   56845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 11:49:15.990133   56845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 11:49:15.999602   56845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 11:49:16.008767   56845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 11:49:16.008855   56845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 11:49:16.019564   56845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 11:49:16.028585   56845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 11:49:16.028660   56845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 11:49:16.037916   56845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 11:49:16.047028   56845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 11:49:16.047087   56845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 11:49:16.056780   56845 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 11:49:16.104764   56845 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0812 11:49:16.104848   56845 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 11:49:16.239085   56845 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 11:49:16.239218   56845 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 11:49:16.239309   56845 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0812 11:49:16.456581   56845 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 11:49:16.458619   56845 out.go:204]   - Generating certificates and keys ...
	I0812 11:49:16.458731   56845 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 11:49:16.458805   56845 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 11:49:16.458927   56845 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0812 11:49:16.459037   56845 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0812 11:49:16.459121   56845 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0812 11:49:16.459191   56845 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0812 11:49:16.459281   56845 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0812 11:49:16.459385   56845 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0812 11:49:16.459469   56845 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0812 11:49:16.459569   56845 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0812 11:49:16.459643   56845 kubeadm.go:310] [certs] Using the existing "sa" key
	I0812 11:49:16.459734   56845 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 11:49:16.579477   56845 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 11:49:16.765880   56845 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0812 11:49:16.885469   56845 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 11:49:16.955885   56845 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 11:49:17.091576   56845 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 11:49:17.092005   56845 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 11:49:17.094454   56845 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 11:49:17.096720   56845 out.go:204]   - Booting up control plane ...
	I0812 11:49:17.096850   56845 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 11:49:17.096976   56845 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 11:49:17.098357   56845 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 11:49:17.115656   56845 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 11:49:17.116069   56845 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 11:49:17.116128   56845 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 11:49:17.256475   56845 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0812 11:49:17.256550   56845 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0812 11:49:17.758741   56845 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.271569ms
	I0812 11:49:17.758818   56845 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0812 11:49:16.264606   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:16.764905   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:17.264989   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:17.765205   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:18.265008   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:18.380060   57616 kubeadm.go:1113] duration metric: took 4.817945872s to wait for elevateKubeSystemPrivileges
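(Note on the wait just completed: the repeated "kubectl get sa default" runs above are retried roughly every 500ms until the default service account exists, which is how the ~4.8s "wait for elevateKubeSystemPrivileges" figure accumulates. A hypothetical Go sketch of that retry loop; the 2-minute cap is an assumption, not taken from the log.)

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	start := time.Now()
    	for {
    		// Succeeds once the "default" service account has been created.
    		err := exec.Command("kubectl", "--namespace", "default", "get", "sa", "default").Run()
    		if err == nil {
    			fmt.Printf("default service account ready after %s\n", time.Since(start))
    			return
    		}
    		if time.Since(start) > 2*time.Minute { // illustrative cap only
    			fmt.Println("gave up waiting for default service account")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }
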
	I0812 11:49:18.380107   57616 kubeadm.go:394] duration metric: took 5m0.782175026s to StartCluster
	I0812 11:49:18.380131   57616 settings.go:142] acquiring lock: {Name:mk4060151bda1f131d6abfbbd19b6a8ab3b5e774 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:49:18.380237   57616 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 11:49:18.382942   57616 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/kubeconfig: {Name:mke68f1d372d8e5baa69199b426efaec54860499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:49:18.383329   57616 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.148 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 11:49:18.383406   57616 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0812 11:49:18.383564   57616 addons.go:69] Setting storage-provisioner=true in profile "no-preload-993542"
	I0812 11:49:18.383573   57616 addons.go:69] Setting default-storageclass=true in profile "no-preload-993542"
	I0812 11:49:18.383603   57616 addons.go:234] Setting addon storage-provisioner=true in "no-preload-993542"
	W0812 11:49:18.383618   57616 addons.go:243] addon storage-provisioner should already be in state true
	I0812 11:49:18.383620   57616 config.go:182] Loaded profile config "no-preload-993542": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0812 11:49:18.383634   57616 addons.go:69] Setting metrics-server=true in profile "no-preload-993542"
	I0812 11:49:18.383653   57616 host.go:66] Checking if "no-preload-993542" exists ...
	I0812 11:49:18.383621   57616 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-993542"
	I0812 11:49:18.383662   57616 addons.go:234] Setting addon metrics-server=true in "no-preload-993542"
	W0812 11:49:18.383674   57616 addons.go:243] addon metrics-server should already be in state true
	I0812 11:49:18.383708   57616 host.go:66] Checking if "no-preload-993542" exists ...
	I0812 11:49:18.384042   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.384072   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.384089   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.384117   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.384181   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.384211   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.386531   57616 out.go:177] * Verifying Kubernetes components...
	I0812 11:49:18.388412   57616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:49:18.404269   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44399
	I0812 11:49:18.404302   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36427
	I0812 11:49:18.404279   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43565
	I0812 11:49:18.405011   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.405062   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.405012   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.405601   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.405603   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.405621   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.405636   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.405743   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.405769   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.406150   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.406174   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.406184   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.406762   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.406786   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.407101   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetState
	I0812 11:49:18.407395   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.407420   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.411782   57616 addons.go:234] Setting addon default-storageclass=true in "no-preload-993542"
	W0812 11:49:18.411813   57616 addons.go:243] addon default-storageclass should already be in state true
	I0812 11:49:18.411843   57616 host.go:66] Checking if "no-preload-993542" exists ...
	I0812 11:49:18.412202   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.412241   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.428999   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41401
	I0812 11:49:18.429469   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.430064   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.430087   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.430147   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45407
	I0812 11:49:18.430442   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.430500   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.430762   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetState
	I0812 11:49:18.431525   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.431539   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.431950   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.432152   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetState
	I0812 11:49:18.432474   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46405
	I0812 11:49:18.432876   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.433599   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.433618   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.433872   57616 main.go:141] libmachine: (no-preload-993542) Calling .DriverName
	I0812 11:49:18.434119   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.434381   57616 main.go:141] libmachine: (no-preload-993542) Calling .DriverName
	I0812 11:49:18.434819   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.434875   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.436590   57616 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 11:49:18.436703   57616 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0812 11:49:16.285160   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:18.438442   57616 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 11:49:18.438466   57616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 11:49:18.438489   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHHostname
	I0812 11:49:18.438698   57616 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0812 11:49:18.438713   57616 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0812 11:49:18.438731   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHHostname
	I0812 11:49:18.443927   57616 main.go:141] libmachine: (no-preload-993542) DBG | domain no-preload-993542 has defined MAC address 52:54:00:bc:11:b5 in network mk-no-preload-993542
	I0812 11:49:18.443965   57616 main.go:141] libmachine: (no-preload-993542) DBG | domain no-preload-993542 has defined MAC address 52:54:00:bc:11:b5 in network mk-no-preload-993542
	I0812 11:49:18.444276   57616 main.go:141] libmachine: (no-preload-993542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:11:b5", ip: ""} in network mk-no-preload-993542: {Iface:virbr1 ExpiryTime:2024-08-12 12:43:50 +0000 UTC Type:0 Mac:52:54:00:bc:11:b5 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:no-preload-993542 Clientid:01:52:54:00:bc:11:b5}
	I0812 11:49:18.444315   57616 main.go:141] libmachine: (no-preload-993542) DBG | domain no-preload-993542 has defined IP address 192.168.61.148 and MAC address 52:54:00:bc:11:b5 in network mk-no-preload-993542
	I0812 11:49:18.444373   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHPort
	I0812 11:49:18.444614   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHKeyPath
	I0812 11:49:18.444790   57616 main.go:141] libmachine: (no-preload-993542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:11:b5", ip: ""} in network mk-no-preload-993542: {Iface:virbr1 ExpiryTime:2024-08-12 12:43:50 +0000 UTC Type:0 Mac:52:54:00:bc:11:b5 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:no-preload-993542 Clientid:01:52:54:00:bc:11:b5}
	I0812 11:49:18.444824   57616 main.go:141] libmachine: (no-preload-993542) DBG | domain no-preload-993542 has defined IP address 192.168.61.148 and MAC address 52:54:00:bc:11:b5 in network mk-no-preload-993542
	I0812 11:49:18.444851   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHUsername
	I0812 11:49:18.445055   57616 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/no-preload-993542/id_rsa Username:docker}
	I0812 11:49:18.445427   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHPort
	I0812 11:49:18.445624   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHKeyPath
	I0812 11:49:18.445776   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHUsername
	I0812 11:49:18.445938   57616 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/no-preload-993542/id_rsa Username:docker}
	I0812 11:49:18.457462   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I0812 11:49:18.457995   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.458573   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.458602   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.459048   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.459315   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetState
	I0812 11:49:18.461486   57616 main.go:141] libmachine: (no-preload-993542) Calling .DriverName
	I0812 11:49:18.461753   57616 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 11:49:18.461770   57616 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 11:49:18.461788   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHHostname
	I0812 11:49:18.465243   57616 main.go:141] libmachine: (no-preload-993542) DBG | domain no-preload-993542 has defined MAC address 52:54:00:bc:11:b5 in network mk-no-preload-993542
	I0812 11:49:18.465776   57616 main.go:141] libmachine: (no-preload-993542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:11:b5", ip: ""} in network mk-no-preload-993542: {Iface:virbr1 ExpiryTime:2024-08-12 12:43:50 +0000 UTC Type:0 Mac:52:54:00:bc:11:b5 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:no-preload-993542 Clientid:01:52:54:00:bc:11:b5}
	I0812 11:49:18.465803   57616 main.go:141] libmachine: (no-preload-993542) DBG | domain no-preload-993542 has defined IP address 192.168.61.148 and MAC address 52:54:00:bc:11:b5 in network mk-no-preload-993542
	I0812 11:49:18.465981   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHPort
	I0812 11:49:18.466172   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHKeyPath
	I0812 11:49:18.466325   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHUsername
	I0812 11:49:18.466478   57616 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/no-preload-993542/id_rsa Username:docker}
	I0812 11:49:18.649285   57616 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 11:49:18.666240   57616 node_ready.go:35] waiting up to 6m0s for node "no-preload-993542" to be "Ready" ...
	I0812 11:49:18.675741   57616 node_ready.go:49] node "no-preload-993542" has status "Ready":"True"
	I0812 11:49:18.675769   57616 node_ready.go:38] duration metric: took 9.489483ms for node "no-preload-993542" to be "Ready" ...
	I0812 11:49:18.675781   57616 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:49:18.687934   57616 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-2gc2z" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:18.762652   57616 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 11:49:18.769504   57616 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0812 11:49:18.769533   57616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0812 11:49:18.801182   57616 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 11:49:18.815215   57616 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0812 11:49:18.815249   57616 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0812 11:49:18.869830   57616 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 11:49:18.869856   57616 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0812 11:49:18.943609   57616 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 11:49:19.326108   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.326145   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.326183   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.326200   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.326517   57616 main.go:141] libmachine: (no-preload-993542) DBG | Closing plugin on server side
	I0812 11:49:19.326543   57616 main.go:141] libmachine: (no-preload-993542) DBG | Closing plugin on server side
	I0812 11:49:19.326558   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.326571   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.326577   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.326580   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.326586   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.326588   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.326597   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.326598   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.326969   57616 main.go:141] libmachine: (no-preload-993542) DBG | Closing plugin on server side
	I0812 11:49:19.326997   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.327005   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.327232   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.327247   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.349315   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.349341   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.349693   57616 main.go:141] libmachine: (no-preload-993542) DBG | Closing plugin on server side
	I0812 11:49:19.349737   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.349746   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.620732   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.620765   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.621097   57616 main.go:141] libmachine: (no-preload-993542) DBG | Closing plugin on server side
	I0812 11:49:19.621143   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.621160   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.621170   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.621182   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.621446   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.621469   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.621481   57616 addons.go:475] Verifying addon metrics-server=true in "no-preload-993542"
	I0812 11:49:19.624757   57616 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0812 11:49:19.626510   57616 addons.go:510] duration metric: took 1.243102289s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0812 11:49:20.695552   57616 pod_ready.go:102] pod "coredns-6f6b679f8f-2gc2z" in "kube-system" namespace has status "Ready":"False"
	I0812 11:49:22.762626   56845 kubeadm.go:310] [api-check] The API server is healthy after 5.002108915s
	I0812 11:49:22.782365   56845 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0812 11:49:22.794869   56845 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0812 11:49:22.829058   56845 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0812 11:49:22.829314   56845 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-093615 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0812 11:49:22.842722   56845 kubeadm.go:310] [bootstrap-token] Using token: e42mo3.61s6ofjvy51u5vh7
	I0812 11:49:22.844590   56845 out.go:204]   - Configuring RBAC rules ...
	I0812 11:49:22.844745   56845 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0812 11:49:22.851804   56845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0812 11:49:22.861419   56845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0812 11:49:22.866597   56845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0812 11:49:22.870810   56845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0812 11:49:22.886117   56845 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0812 11:49:22.365060   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:23.168156   56845 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0812 11:49:23.612002   56845 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0812 11:49:24.170270   56845 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0812 11:49:24.171014   56845 kubeadm.go:310] 
	I0812 11:49:24.171076   56845 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0812 11:49:24.171084   56845 kubeadm.go:310] 
	I0812 11:49:24.171146   56845 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0812 11:49:24.171153   56845 kubeadm.go:310] 
	I0812 11:49:24.171204   56845 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0812 11:49:24.171801   56845 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0812 11:49:24.171846   56845 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0812 11:49:24.171853   56845 kubeadm.go:310] 
	I0812 11:49:24.171954   56845 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0812 11:49:24.171975   56845 kubeadm.go:310] 
	I0812 11:49:24.172039   56845 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0812 11:49:24.172051   56845 kubeadm.go:310] 
	I0812 11:49:24.172125   56845 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0812 11:49:24.172247   56845 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0812 11:49:24.172360   56845 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0812 11:49:24.172378   56845 kubeadm.go:310] 
	I0812 11:49:24.172498   56845 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0812 11:49:24.172601   56845 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0812 11:49:24.172611   56845 kubeadm.go:310] 
	I0812 11:49:24.172772   56845 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token e42mo3.61s6ofjvy51u5vh7 \
	I0812 11:49:24.172908   56845 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc \
	I0812 11:49:24.172944   56845 kubeadm.go:310] 	--control-plane 
	I0812 11:49:24.172953   56845 kubeadm.go:310] 
	I0812 11:49:24.173063   56845 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0812 11:49:24.173073   56845 kubeadm.go:310] 
	I0812 11:49:24.173209   56845 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token e42mo3.61s6ofjvy51u5vh7 \
	I0812 11:49:24.173363   56845 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc 
	I0812 11:49:24.173919   56845 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 11:49:24.173990   56845 cni.go:84] Creating CNI manager for ""
	I0812 11:49:24.174008   56845 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:49:24.176549   56845 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0812 11:49:25.662550   57198 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0812 11:49:25.662668   57198 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0812 11:49:25.664487   57198 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0812 11:49:25.664563   57198 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 11:49:25.664640   57198 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 11:49:25.664729   57198 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 11:49:25.664809   57198 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0812 11:49:25.664949   57198 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 11:49:25.666793   57198 out.go:204]   - Generating certificates and keys ...
	I0812 11:49:25.666861   57198 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 11:49:25.666925   57198 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 11:49:25.667017   57198 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0812 11:49:25.667091   57198 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0812 11:49:25.667181   57198 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0812 11:49:25.667232   57198 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0812 11:49:25.667306   57198 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0812 11:49:25.667359   57198 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0812 11:49:25.667437   57198 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0812 11:49:25.667536   57198 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0812 11:49:25.667592   57198 kubeadm.go:310] [certs] Using the existing "sa" key
	I0812 11:49:25.667680   57198 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 11:49:25.667754   57198 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 11:49:25.667839   57198 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 11:49:25.667950   57198 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 11:49:25.668040   57198 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 11:49:25.668189   57198 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 11:49:25.668289   57198 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 11:49:25.668333   57198 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 11:49:25.668400   57198 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 11:49:22.696279   57616 pod_ready.go:102] pod "coredns-6f6b679f8f-2gc2z" in "kube-system" namespace has status "Ready":"False"
	I0812 11:49:25.194695   57616 pod_ready.go:102] pod "coredns-6f6b679f8f-2gc2z" in "kube-system" namespace has status "Ready":"False"
	I0812 11:49:25.695175   57616 pod_ready.go:92] pod "coredns-6f6b679f8f-2gc2z" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:25.695199   57616 pod_ready.go:81] duration metric: took 7.007233179s for pod "coredns-6f6b679f8f-2gc2z" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:25.695209   57616 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-shfmr" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:25.670765   57198 out.go:204]   - Booting up control plane ...
	I0812 11:49:25.670861   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 11:49:25.670939   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 11:49:25.671039   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 11:49:25.671150   57198 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 11:49:25.671295   57198 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0812 11:49:25.671379   57198 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0812 11:49:25.671476   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:49:25.671647   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:49:25.671705   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:49:25.671862   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:49:25.671919   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:49:25.672079   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:49:25.672136   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:49:25.672288   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:49:25.672347   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:49:25.672558   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:49:25.672576   57198 kubeadm.go:310] 
	I0812 11:49:25.672636   57198 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0812 11:49:25.672686   57198 kubeadm.go:310] 		timed out waiting for the condition
	I0812 11:49:25.672701   57198 kubeadm.go:310] 
	I0812 11:49:25.672757   57198 kubeadm.go:310] 	This error is likely caused by:
	I0812 11:49:25.672811   57198 kubeadm.go:310] 		- The kubelet is not running
	I0812 11:49:25.672932   57198 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0812 11:49:25.672941   57198 kubeadm.go:310] 
	I0812 11:49:25.673048   57198 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0812 11:49:25.673091   57198 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0812 11:49:25.673133   57198 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0812 11:49:25.673141   57198 kubeadm.go:310] 
	I0812 11:49:25.673242   57198 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0812 11:49:25.673343   57198 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0812 11:49:25.673353   57198 kubeadm.go:310] 
	I0812 11:49:25.673513   57198 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0812 11:49:25.673593   57198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0812 11:49:25.673660   57198 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0812 11:49:25.673724   57198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0812 11:49:25.673768   57198 kubeadm.go:310] 
	W0812 11:49:25.673837   57198 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0812 11:49:25.673882   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0812 11:49:26.145437   57198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:49:26.160316   57198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 11:49:26.169638   57198 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 11:49:26.169664   57198 kubeadm.go:157] found existing configuration files:
	
	I0812 11:49:26.169711   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 11:49:26.179210   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 11:49:26.179278   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 11:49:26.189165   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 11:49:26.198952   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 11:49:26.199019   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 11:49:26.208905   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 11:49:26.217947   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 11:49:26.218003   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 11:49:26.227048   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 11:49:26.235890   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 11:49:26.235946   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 11:49:26.245085   57198 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 11:49:26.313657   57198 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0812 11:49:26.313809   57198 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 11:49:26.463967   57198 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 11:49:26.464098   57198 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 11:49:26.464204   57198 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0812 11:49:26.650503   57198 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 11:49:26.652540   57198 out.go:204]   - Generating certificates and keys ...
	I0812 11:49:26.652631   57198 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 11:49:26.652686   57198 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 11:49:26.652751   57198 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0812 11:49:26.652803   57198 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0812 11:49:26.652913   57198 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0812 11:49:26.652983   57198 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0812 11:49:26.653052   57198 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0812 11:49:26.653157   57198 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0812 11:49:26.653299   57198 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0812 11:49:26.653430   57198 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0812 11:49:26.653489   57198 kubeadm.go:310] [certs] Using the existing "sa" key
	I0812 11:49:26.653569   57198 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 11:49:26.881003   57198 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 11:49:26.962055   57198 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 11:49:27.166060   57198 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 11:49:27.340900   57198 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 11:49:27.359946   57198 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 11:49:27.362022   57198 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 11:49:27.362302   57198 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 11:49:27.515254   57198 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 11:49:24.177809   56845 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0812 11:49:24.188175   56845 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0812 11:49:24.208060   56845 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 11:49:24.208152   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:24.208209   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-093615 minikube.k8s.io/updated_at=2024_08_12T11_49_24_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7 minikube.k8s.io/name=embed-certs-093615 minikube.k8s.io/primary=true
	I0812 11:49:24.393211   56845 ops.go:34] apiserver oom_adj: -16
	I0812 11:49:24.393296   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:24.894092   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:25.394229   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:25.893667   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:26.394057   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:26.893509   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:27.394296   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:27.893453   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:25.441104   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:27.517314   57198 out.go:204]   - Booting up control plane ...
	I0812 11:49:27.517444   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 11:49:27.523528   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 11:49:27.524732   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 11:49:27.525723   57198 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 11:49:27.527868   57198 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0812 11:49:27.702461   57616 pod_ready.go:102] pod "coredns-6f6b679f8f-shfmr" in "kube-system" namespace has status "Ready":"False"
	I0812 11:49:28.202582   57616 pod_ready.go:92] pod "coredns-6f6b679f8f-shfmr" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:28.202608   57616 pod_ready.go:81] duration metric: took 2.507391262s for pod "coredns-6f6b679f8f-shfmr" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.202621   57616 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.207529   57616 pod_ready.go:92] pod "etcd-no-preload-993542" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:28.207551   57616 pod_ready.go:81] duration metric: took 4.923206ms for pod "etcd-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.207560   57616 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.212760   57616 pod_ready.go:92] pod "kube-apiserver-no-preload-993542" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:28.212794   57616 pod_ready.go:81] duration metric: took 5.223592ms for pod "kube-apiserver-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.212807   57616 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.216970   57616 pod_ready.go:92] pod "kube-controller-manager-no-preload-993542" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:28.216993   57616 pod_ready.go:81] duration metric: took 4.177186ms for pod "kube-controller-manager-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.217004   57616 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8jwkz" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.221078   57616 pod_ready.go:92] pod "kube-proxy-8jwkz" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:28.221096   57616 pod_ready.go:81] duration metric: took 4.085629ms for pod "kube-proxy-8jwkz" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.221105   57616 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.600004   57616 pod_ready.go:92] pod "kube-scheduler-no-preload-993542" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:28.600031   57616 pod_ready.go:81] duration metric: took 378.92044ms for pod "kube-scheduler-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.600039   57616 pod_ready.go:38] duration metric: took 9.924247425s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:49:28.600053   57616 api_server.go:52] waiting for apiserver process to appear ...
	I0812 11:49:28.600102   57616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:49:28.615007   57616 api_server.go:72] duration metric: took 10.231634381s to wait for apiserver process to appear ...
	I0812 11:49:28.615043   57616 api_server.go:88] waiting for apiserver healthz status ...
	I0812 11:49:28.615063   57616 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8443/healthz ...
	I0812 11:49:28.620301   57616 api_server.go:279] https://192.168.61.148:8443/healthz returned 200:
	ok
	I0812 11:49:28.621814   57616 api_server.go:141] control plane version: v1.31.0-rc.0
	I0812 11:49:28.621843   57616 api_server.go:131] duration metric: took 6.792657ms to wait for apiserver health ...
	I0812 11:49:28.621858   57616 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 11:49:28.804172   57616 system_pods.go:59] 9 kube-system pods found
	I0812 11:49:28.804204   57616 system_pods.go:61] "coredns-6f6b679f8f-2gc2z" [4d5375c0-6f19-40b7-98bc-50d4ef45fd93] Running
	I0812 11:49:28.804208   57616 system_pods.go:61] "coredns-6f6b679f8f-shfmr" [6fd90de8-af9e-4b43-9fa7-b503a00e9845] Running
	I0812 11:49:28.804213   57616 system_pods.go:61] "etcd-no-preload-993542" [c3144e52-830b-47f1-913d-e44880368ee4] Running
	I0812 11:49:28.804216   57616 system_pods.go:61] "kube-apiserver-no-preload-993542" [73061d9a-d3cd-421a-bbd5-7bfe221d8729] Running
	I0812 11:49:28.804219   57616 system_pods.go:61] "kube-controller-manager-no-preload-993542" [0999e6c2-30b8-4d53-9420-6a00757eb9d4] Running
	I0812 11:49:28.804224   57616 system_pods.go:61] "kube-proxy-8jwkz" [43501e17-fde3-4468-a170-e64a58088ec2] Running
	I0812 11:49:28.804227   57616 system_pods.go:61] "kube-scheduler-no-preload-993542" [edaa4d82-7994-4052-ba5b-5729c543c006] Running
	I0812 11:49:28.804232   57616 system_pods.go:61] "metrics-server-6867b74b74-25zg8" [70d17780-d4bc-4df4-93ac-bb74c1fa50f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 11:49:28.804236   57616 system_pods.go:61] "storage-provisioner" [beb7a321-e575-44e5-8d10-3749d1285806] Running
	I0812 11:49:28.804244   57616 system_pods.go:74] duration metric: took 182.379622ms to wait for pod list to return data ...
	I0812 11:49:28.804251   57616 default_sa.go:34] waiting for default service account to be created ...
	I0812 11:49:28.999537   57616 default_sa.go:45] found service account: "default"
	I0812 11:49:28.999571   57616 default_sa.go:55] duration metric: took 195.31354ms for default service account to be created ...
	I0812 11:49:28.999582   57616 system_pods.go:116] waiting for k8s-apps to be running ...
	I0812 11:49:29.205266   57616 system_pods.go:86] 9 kube-system pods found
	I0812 11:49:29.205296   57616 system_pods.go:89] "coredns-6f6b679f8f-2gc2z" [4d5375c0-6f19-40b7-98bc-50d4ef45fd93] Running
	I0812 11:49:29.205301   57616 system_pods.go:89] "coredns-6f6b679f8f-shfmr" [6fd90de8-af9e-4b43-9fa7-b503a00e9845] Running
	I0812 11:49:29.205306   57616 system_pods.go:89] "etcd-no-preload-993542" [c3144e52-830b-47f1-913d-e44880368ee4] Running
	I0812 11:49:29.205310   57616 system_pods.go:89] "kube-apiserver-no-preload-993542" [73061d9a-d3cd-421a-bbd5-7bfe221d8729] Running
	I0812 11:49:29.205315   57616 system_pods.go:89] "kube-controller-manager-no-preload-993542" [0999e6c2-30b8-4d53-9420-6a00757eb9d4] Running
	I0812 11:49:29.205319   57616 system_pods.go:89] "kube-proxy-8jwkz" [43501e17-fde3-4468-a170-e64a58088ec2] Running
	I0812 11:49:29.205323   57616 system_pods.go:89] "kube-scheduler-no-preload-993542" [edaa4d82-7994-4052-ba5b-5729c543c006] Running
	I0812 11:49:29.205329   57616 system_pods.go:89] "metrics-server-6867b74b74-25zg8" [70d17780-d4bc-4df4-93ac-bb74c1fa50f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 11:49:29.205335   57616 system_pods.go:89] "storage-provisioner" [beb7a321-e575-44e5-8d10-3749d1285806] Running
	I0812 11:49:29.205342   57616 system_pods.go:126] duration metric: took 205.754437ms to wait for k8s-apps to be running ...
	I0812 11:49:29.205348   57616 system_svc.go:44] waiting for kubelet service to be running ....
	I0812 11:49:29.205390   57616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:49:29.220297   57616 system_svc.go:56] duration metric: took 14.940181ms WaitForService to wait for kubelet
	I0812 11:49:29.220343   57616 kubeadm.go:582] duration metric: took 10.836962086s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 11:49:29.220369   57616 node_conditions.go:102] verifying NodePressure condition ...
	I0812 11:49:29.400598   57616 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 11:49:29.400634   57616 node_conditions.go:123] node cpu capacity is 2
	I0812 11:49:29.400648   57616 node_conditions.go:105] duration metric: took 180.272764ms to run NodePressure ...
	I0812 11:49:29.400663   57616 start.go:241] waiting for startup goroutines ...
	I0812 11:49:29.400675   57616 start.go:246] waiting for cluster config update ...
	I0812 11:49:29.400691   57616 start.go:255] writing updated cluster config ...
	I0812 11:49:29.401086   57616 ssh_runner.go:195] Run: rm -f paused
	I0812 11:49:29.454975   57616 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-rc.0 (minor skew: 1)
	I0812 11:49:29.457349   57616 out.go:177] * Done! kubectl is now configured to use "no-preload-993542" cluster and "default" namespace by default
	I0812 11:49:28.394104   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:28.894284   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:29.393380   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:29.893417   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:30.394034   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:30.893668   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:31.394322   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:31.894069   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:32.393691   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:32.893944   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:31.517192   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:33.393880   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:33.894126   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:34.393857   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:34.893356   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:35.394181   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:35.894116   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:36.393690   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:36.893650   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:37.394325   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:37.524187   56845 kubeadm.go:1113] duration metric: took 13.316085022s to wait for elevateKubeSystemPrivileges
	I0812 11:49:37.524225   56845 kubeadm.go:394] duration metric: took 5m12.500523071s to StartCluster
	I0812 11:49:37.524246   56845 settings.go:142] acquiring lock: {Name:mk4060151bda1f131d6abfbbd19b6a8ab3b5e774 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:49:37.524334   56845 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 11:49:37.526822   56845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/kubeconfig: {Name:mke68f1d372d8e5baa69199b426efaec54860499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:49:37.527125   56845 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.191 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 11:49:37.527189   56845 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0812 11:49:37.527272   56845 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-093615"
	I0812 11:49:37.527285   56845 addons.go:69] Setting default-storageclass=true in profile "embed-certs-093615"
	I0812 11:49:37.527307   56845 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-093615"
	I0812 11:49:37.527307   56845 config.go:182] Loaded profile config "embed-certs-093615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	W0812 11:49:37.527315   56845 addons.go:243] addon storage-provisioner should already be in state true
	I0812 11:49:37.527318   56845 addons.go:69] Setting metrics-server=true in profile "embed-certs-093615"
	I0812 11:49:37.527337   56845 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-093615"
	I0812 11:49:37.527345   56845 host.go:66] Checking if "embed-certs-093615" exists ...
	I0812 11:49:37.527362   56845 addons.go:234] Setting addon metrics-server=true in "embed-certs-093615"
	W0812 11:49:37.527375   56845 addons.go:243] addon metrics-server should already be in state true
	I0812 11:49:37.527413   56845 host.go:66] Checking if "embed-certs-093615" exists ...
	I0812 11:49:37.527769   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.527791   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.527816   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.527798   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.527769   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.527928   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.528806   56845 out.go:177] * Verifying Kubernetes components...
	I0812 11:49:37.530366   56845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:49:37.544367   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33533
	I0812 11:49:37.544919   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45995
	I0812 11:49:37.545052   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.545492   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.545535   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.545551   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.546095   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.546220   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.546247   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.546267   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetState
	I0812 11:49:37.547090   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.547667   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.547697   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.548008   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45761
	I0812 11:49:37.550024   56845 addons.go:234] Setting addon default-storageclass=true in "embed-certs-093615"
	W0812 11:49:37.550048   56845 addons.go:243] addon default-storageclass should already be in state true
	I0812 11:49:37.550079   56845 host.go:66] Checking if "embed-certs-093615" exists ...
	I0812 11:49:37.550469   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.550500   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.550728   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.551342   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.551373   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.551748   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.552314   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.552354   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.566505   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40769
	I0812 11:49:37.567085   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.567510   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.567526   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.567900   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.568133   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetState
	I0812 11:49:37.570307   56845 main.go:141] libmachine: (embed-certs-093615) Calling .DriverName
	I0812 11:49:37.571789   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36425
	I0812 11:49:37.572127   56845 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 11:49:37.572191   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.572730   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.572752   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.573044   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43723
	I0812 11:49:37.573231   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.573619   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.573815   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.573840   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.573849   56845 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 11:49:37.573870   56845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 11:49:37.573890   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHHostname
	I0812 11:49:37.574787   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.574809   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.575722   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.575937   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetState
	I0812 11:49:37.578054   56845 main.go:141] libmachine: (embed-certs-093615) DBG | domain embed-certs-093615 has defined MAC address 52:54:00:a2:eb:0c in network mk-embed-certs-093615
	I0812 11:49:37.578069   56845 main.go:141] libmachine: (embed-certs-093615) Calling .DriverName
	I0812 11:49:37.578536   56845 main.go:141] libmachine: (embed-certs-093615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:eb:0c", ip: ""} in network mk-embed-certs-093615: {Iface:virbr3 ExpiryTime:2024-08-12 12:44:10 +0000 UTC Type:0 Mac:52:54:00:a2:eb:0c Iaid: IPaddr:192.168.72.191 Prefix:24 Hostname:embed-certs-093615 Clientid:01:52:54:00:a2:eb:0c}
	I0812 11:49:37.578565   56845 main.go:141] libmachine: (embed-certs-093615) DBG | domain embed-certs-093615 has defined IP address 192.168.72.191 and MAC address 52:54:00:a2:eb:0c in network mk-embed-certs-093615
	I0812 11:49:37.578833   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHPort
	I0812 11:49:37.579012   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHKeyPath
	I0812 11:49:37.579170   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHUsername
	I0812 11:49:37.579326   56845 sshutil.go:53] new ssh client: &{IP:192.168.72.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/embed-certs-093615/id_rsa Username:docker}
	I0812 11:49:37.580007   56845 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0812 11:49:37.581298   56845 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0812 11:49:37.581313   56845 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0812 11:49:37.581334   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHHostname
	I0812 11:49:37.585114   56845 main.go:141] libmachine: (embed-certs-093615) DBG | domain embed-certs-093615 has defined MAC address 52:54:00:a2:eb:0c in network mk-embed-certs-093615
	I0812 11:49:37.585809   56845 main.go:141] libmachine: (embed-certs-093615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:eb:0c", ip: ""} in network mk-embed-certs-093615: {Iface:virbr3 ExpiryTime:2024-08-12 12:44:10 +0000 UTC Type:0 Mac:52:54:00:a2:eb:0c Iaid: IPaddr:192.168.72.191 Prefix:24 Hostname:embed-certs-093615 Clientid:01:52:54:00:a2:eb:0c}
	I0812 11:49:37.585839   56845 main.go:141] libmachine: (embed-certs-093615) DBG | domain embed-certs-093615 has defined IP address 192.168.72.191 and MAC address 52:54:00:a2:eb:0c in network mk-embed-certs-093615
	I0812 11:49:37.585914   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHPort
	I0812 11:49:37.586160   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHKeyPath
	I0812 11:49:37.586338   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHUsername
	I0812 11:49:37.586476   56845 sshutil.go:53] new ssh client: &{IP:192.168.72.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/embed-certs-093615/id_rsa Username:docker}
	I0812 11:49:37.591678   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I0812 11:49:37.592146   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.592684   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.592702   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.593075   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.593241   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetState
	I0812 11:49:37.595117   56845 main.go:141] libmachine: (embed-certs-093615) Calling .DriverName
	I0812 11:49:37.595398   56845 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 11:49:37.595413   56845 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 11:49:37.595430   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHHostname
	I0812 11:49:37.598417   56845 main.go:141] libmachine: (embed-certs-093615) DBG | domain embed-certs-093615 has defined MAC address 52:54:00:a2:eb:0c in network mk-embed-certs-093615
	I0812 11:49:37.598771   56845 main.go:141] libmachine: (embed-certs-093615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:eb:0c", ip: ""} in network mk-embed-certs-093615: {Iface:virbr3 ExpiryTime:2024-08-12 12:44:10 +0000 UTC Type:0 Mac:52:54:00:a2:eb:0c Iaid: IPaddr:192.168.72.191 Prefix:24 Hostname:embed-certs-093615 Clientid:01:52:54:00:a2:eb:0c}
	I0812 11:49:37.598792   56845 main.go:141] libmachine: (embed-certs-093615) DBG | domain embed-certs-093615 has defined IP address 192.168.72.191 and MAC address 52:54:00:a2:eb:0c in network mk-embed-certs-093615
	I0812 11:49:37.599008   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHPort
	I0812 11:49:37.599209   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHKeyPath
	I0812 11:49:37.599369   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHUsername
	I0812 11:49:37.599507   56845 sshutil.go:53] new ssh client: &{IP:192.168.72.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/embed-certs-093615/id_rsa Username:docker}
	I0812 11:49:37.757714   56845 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 11:49:37.783594   56845 node_ready.go:35] waiting up to 6m0s for node "embed-certs-093615" to be "Ready" ...
	I0812 11:49:37.801679   56845 node_ready.go:49] node "embed-certs-093615" has status "Ready":"True"
	I0812 11:49:37.801707   56845 node_ready.go:38] duration metric: took 18.078817ms for node "embed-certs-093615" to be "Ready" ...
	I0812 11:49:37.801719   56845 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:49:37.814704   56845 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cjbwn" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:37.860064   56845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 11:49:37.913642   56845 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0812 11:49:37.913673   56845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0812 11:49:37.932638   56845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 11:49:37.948027   56845 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0812 11:49:37.948052   56845 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0812 11:49:38.000773   56845 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 11:49:38.000805   56845 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0812 11:49:38.050478   56845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 11:49:38.655431   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.655458   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.655477   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.655460   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.655760   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.655875   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.655888   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.655897   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.655792   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.655971   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.655979   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.655986   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.655812   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.655832   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.656156   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.656161   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.656172   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.656199   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.656225   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.656231   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.707240   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.707268   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.707596   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.707618   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.707667   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.832725   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.832758   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.833072   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.833114   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.833134   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.833155   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.833165   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.833416   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.833461   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.833472   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.833483   56845 addons.go:475] Verifying addon metrics-server=true in "embed-certs-093615"
	I0812 11:49:38.835319   56845 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0812 11:49:34.589171   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:38.836977   56845 addons.go:510] duration metric: took 1.309786928s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
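Note: the preceding lines show the storage-provisioner, default-storageclass, and metrics-server addons being applied and then verified for the "embed-certs-093615" profile. A minimal sketch of checking the same addon state by hand is shown below; the profile name, namespace, and APIService name are taken from the log and from the standard minikube metrics-server addon, and the exact commands are illustrative rather than part of the test run:

	# list addon status for the profile seen in the log (illustrative)
	minikube -p embed-certs-093615 addons list
	# check the deployment and APIService that the metrics-server addon normally registers
	kubectl --context embed-certs-093615 -n kube-system get deployment metrics-server
	kubectl --context embed-certs-093615 get apiservice v1beta1.metrics.k8s.io
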
	I0812 11:49:39.827672   56845 pod_ready.go:102] pod "coredns-7db6d8ff4d-cjbwn" in "kube-system" namespace has status "Ready":"False"
	I0812 11:49:40.820793   56845 pod_ready.go:92] pod "coredns-7db6d8ff4d-cjbwn" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:40.820818   56845 pod_ready.go:81] duration metric: took 3.006078866s for pod "coredns-7db6d8ff4d-cjbwn" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.820828   56845 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zcpcc" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.825674   56845 pod_ready.go:92] pod "coredns-7db6d8ff4d-zcpcc" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:40.825696   56845 pod_ready.go:81] duration metric: took 4.862671ms for pod "coredns-7db6d8ff4d-zcpcc" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.825705   56845 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.830668   56845 pod_ready.go:92] pod "etcd-embed-certs-093615" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:40.830690   56845 pod_ready.go:81] duration metric: took 4.979449ms for pod "etcd-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.830699   56845 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.834732   56845 pod_ready.go:92] pod "kube-apiserver-embed-certs-093615" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:40.834750   56845 pod_ready.go:81] duration metric: took 4.044023ms for pod "kube-apiserver-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.834759   56845 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.838476   56845 pod_ready.go:92] pod "kube-controller-manager-embed-certs-093615" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:40.838493   56845 pod_ready.go:81] duration metric: took 3.728686ms for pod "kube-controller-manager-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.838502   56845 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-26xvl" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:41.219756   56845 pod_ready.go:92] pod "kube-proxy-26xvl" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:41.219778   56845 pod_ready.go:81] duration metric: took 381.271425ms for pod "kube-proxy-26xvl" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:41.219789   56845 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:41.619078   56845 pod_ready.go:92] pod "kube-scheduler-embed-certs-093615" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:41.619107   56845 pod_ready.go:81] duration metric: took 399.30989ms for pod "kube-scheduler-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:41.619117   56845 pod_ready.go:38] duration metric: took 3.817386457s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:49:41.619135   56845 api_server.go:52] waiting for apiserver process to appear ...
	I0812 11:49:41.619197   56845 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:49:41.634452   56845 api_server.go:72] duration metric: took 4.107285578s to wait for apiserver process to appear ...
	I0812 11:49:41.634480   56845 api_server.go:88] waiting for apiserver healthz status ...
	I0812 11:49:41.634505   56845 api_server.go:253] Checking apiserver healthz at https://192.168.72.191:8443/healthz ...
	I0812 11:49:41.639610   56845 api_server.go:279] https://192.168.72.191:8443/healthz returned 200:
	ok
	I0812 11:49:41.640514   56845 api_server.go:141] control plane version: v1.30.3
	I0812 11:49:41.640537   56845 api_server.go:131] duration metric: took 6.049802ms to wait for apiserver health ...
	I0812 11:49:41.640547   56845 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 11:49:41.823614   56845 system_pods.go:59] 9 kube-system pods found
	I0812 11:49:41.823652   56845 system_pods.go:61] "coredns-7db6d8ff4d-cjbwn" [ec8ff679-9b23-481d-b8c5-207b54e7e5ea] Running
	I0812 11:49:41.823659   56845 system_pods.go:61] "coredns-7db6d8ff4d-zcpcc" [ed76b19c-cd96-4754-ae07-08a2a0b91387] Running
	I0812 11:49:41.823665   56845 system_pods.go:61] "etcd-embed-certs-093615" [853d7fe8-00c2-434f-b88a-2b37e1608906] Running
	I0812 11:49:41.823670   56845 system_pods.go:61] "kube-apiserver-embed-certs-093615" [983122d1-800a-4991-96f8-29ae69ea7166] Running
	I0812 11:49:41.823675   56845 system_pods.go:61] "kube-controller-manager-embed-certs-093615" [b9eceb97-a4bd-43e2-a115-c483c9131fa7] Running
	I0812 11:49:41.823680   56845 system_pods.go:61] "kube-proxy-26xvl" [cacdea2f-2ce2-43ab-8e3e-104a7a40d027] Running
	I0812 11:49:41.823685   56845 system_pods.go:61] "kube-scheduler-embed-certs-093615" [b5653b7a-db54-4584-ab69-1232a9c58d9c] Running
	I0812 11:49:41.823693   56845 system_pods.go:61] "metrics-server-569cc877fc-kwk6t" [5817f68c-ab3e-4b50-acf1-8d56d25dcbcd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 11:49:41.823697   56845 system_pods.go:61] "storage-provisioner" [c29d9422-fc62-4536-974b-70ba940152c2] Running
	I0812 11:49:41.823704   56845 system_pods.go:74] duration metric: took 183.151482ms to wait for pod list to return data ...
	I0812 11:49:41.823711   56845 default_sa.go:34] waiting for default service account to be created ...
	I0812 11:49:42.017840   56845 default_sa.go:45] found service account: "default"
	I0812 11:49:42.017870   56845 default_sa.go:55] duration metric: took 194.151916ms for default service account to be created ...
	I0812 11:49:42.017886   56845 system_pods.go:116] waiting for k8s-apps to be running ...
	I0812 11:49:42.222050   56845 system_pods.go:86] 9 kube-system pods found
	I0812 11:49:42.222084   56845 system_pods.go:89] "coredns-7db6d8ff4d-cjbwn" [ec8ff679-9b23-481d-b8c5-207b54e7e5ea] Running
	I0812 11:49:42.222092   56845 system_pods.go:89] "coredns-7db6d8ff4d-zcpcc" [ed76b19c-cd96-4754-ae07-08a2a0b91387] Running
	I0812 11:49:42.222098   56845 system_pods.go:89] "etcd-embed-certs-093615" [853d7fe8-00c2-434f-b88a-2b37e1608906] Running
	I0812 11:49:42.222104   56845 system_pods.go:89] "kube-apiserver-embed-certs-093615" [983122d1-800a-4991-96f8-29ae69ea7166] Running
	I0812 11:49:42.222110   56845 system_pods.go:89] "kube-controller-manager-embed-certs-093615" [b9eceb97-a4bd-43e2-a115-c483c9131fa7] Running
	I0812 11:49:42.222116   56845 system_pods.go:89] "kube-proxy-26xvl" [cacdea2f-2ce2-43ab-8e3e-104a7a40d027] Running
	I0812 11:49:42.222122   56845 system_pods.go:89] "kube-scheduler-embed-certs-093615" [b5653b7a-db54-4584-ab69-1232a9c58d9c] Running
	I0812 11:49:42.222133   56845 system_pods.go:89] "metrics-server-569cc877fc-kwk6t" [5817f68c-ab3e-4b50-acf1-8d56d25dcbcd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 11:49:42.222140   56845 system_pods.go:89] "storage-provisioner" [c29d9422-fc62-4536-974b-70ba940152c2] Running
	I0812 11:49:42.222157   56845 system_pods.go:126] duration metric: took 204.263322ms to wait for k8s-apps to be running ...
	I0812 11:49:42.222169   56845 system_svc.go:44] waiting for kubelet service to be running ....
	I0812 11:49:42.222224   56845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:49:42.235891   56845 system_svc.go:56] duration metric: took 13.715083ms WaitForService to wait for kubelet
	I0812 11:49:42.235920   56845 kubeadm.go:582] duration metric: took 4.708757648s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 11:49:42.235945   56845 node_conditions.go:102] verifying NodePressure condition ...
	I0812 11:49:42.418727   56845 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 11:49:42.418761   56845 node_conditions.go:123] node cpu capacity is 2
	I0812 11:49:42.418773   56845 node_conditions.go:105] duration metric: took 182.823582ms to run NodePressure ...
	I0812 11:49:42.418789   56845 start.go:241] waiting for startup goroutines ...
	I0812 11:49:42.418799   56845 start.go:246] waiting for cluster config update ...
	I0812 11:49:42.418812   56845 start.go:255] writing updated cluster config ...
	I0812 11:49:42.419150   56845 ssh_runner.go:195] Run: rm -f paused
	I0812 11:49:42.468981   56845 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0812 11:49:42.471931   56845 out.go:177] * Done! kubectl is now configured to use "embed-certs-093615" cluster and "default" namespace by default
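Note: at this point the embed-certs-093615 start has completed and kubectl has been pointed at the new cluster. A short, hedged sketch of confirming that state from the host (assuming a standard kubectl install; not something the test itself runs):

	# confirm kubectl is using the context minikube just wrote
	kubectl config current-context    # expected: embed-certs-093615
	# spot-check the node and kube-system pods reported Ready in the log above
	kubectl get nodes -o wide
	kubectl -n kube-system get pods
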
	I0812 11:49:40.669207   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:43.741090   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:49.821138   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:52.893281   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:58.973141   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:02.045165   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:08.129133   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:07.530363   57198 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0812 11:50:07.530652   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:50:07.530821   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:50:11.197137   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:12.531246   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:50:12.531502   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:50:17.277119   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:20.349149   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:22.532192   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:50:22.532372   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:50:26.429100   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:29.501158   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:35.581137   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:38.653143   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:42.533597   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:50:42.533815   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:50:44.733130   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:47.805192   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:53.885100   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:56.957154   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:03.037201   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:06.109079   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:12.189138   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:15.261132   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:22.535173   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:51:22.535490   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:51:22.535516   57198 kubeadm.go:310] 
	I0812 11:51:22.535573   57198 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0812 11:51:22.535625   57198 kubeadm.go:310] 		timed out waiting for the condition
	I0812 11:51:22.535646   57198 kubeadm.go:310] 
	I0812 11:51:22.535692   57198 kubeadm.go:310] 	This error is likely caused by:
	I0812 11:51:22.535728   57198 kubeadm.go:310] 		- The kubelet is not running
	I0812 11:51:22.535855   57198 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0812 11:51:22.535870   57198 kubeadm.go:310] 
	I0812 11:51:22.535954   57198 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0812 11:51:22.535985   57198 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0812 11:51:22.536028   57198 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0812 11:51:22.536038   57198 kubeadm.go:310] 
	I0812 11:51:22.536168   57198 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0812 11:51:22.536276   57198 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0812 11:51:22.536290   57198 kubeadm.go:310] 
	I0812 11:51:22.536440   57198 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0812 11:51:22.536532   57198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0812 11:51:22.536610   57198 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0812 11:51:22.536692   57198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0812 11:51:22.536701   57198 kubeadm.go:310] 
	I0812 11:51:22.537300   57198 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 11:51:22.537416   57198 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0812 11:51:22.537516   57198 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0812 11:51:22.537602   57198 kubeadm.go:394] duration metric: took 7m56.533771451s to StartCluster
	I0812 11:51:22.537650   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:51:22.537769   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:51:22.583654   57198 cri.go:89] found id: ""
	I0812 11:51:22.583679   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.583686   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:51:22.583692   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:51:22.583739   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:51:22.619477   57198 cri.go:89] found id: ""
	I0812 11:51:22.619510   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.619521   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:51:22.619528   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:51:22.619586   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:51:22.653038   57198 cri.go:89] found id: ""
	I0812 11:51:22.653068   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.653078   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:51:22.653085   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:51:22.653149   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:51:22.686106   57198 cri.go:89] found id: ""
	I0812 11:51:22.686134   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.686142   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:51:22.686148   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:51:22.686196   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:51:22.723533   57198 cri.go:89] found id: ""
	I0812 11:51:22.723560   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.723567   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:51:22.723572   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:51:22.723629   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:51:22.767355   57198 cri.go:89] found id: ""
	I0812 11:51:22.767382   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.767390   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:51:22.767395   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:51:22.767472   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:51:22.807472   57198 cri.go:89] found id: ""
	I0812 11:51:22.807509   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.807522   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:51:22.807530   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:51:22.807604   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:51:22.842565   57198 cri.go:89] found id: ""
	I0812 11:51:22.842594   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.842603   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:51:22.842615   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:51:22.842629   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:51:22.894638   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:51:22.894677   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:51:22.907871   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:51:22.907902   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:51:22.989089   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:51:22.989114   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:51:22.989126   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:51:23.114659   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:51:23.114713   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0812 11:51:23.168124   57198 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0812 11:51:23.168182   57198 out.go:239] * 
	W0812 11:51:23.168252   57198 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0812 11:51:23.168284   57198 out.go:239] * 
	W0812 11:51:23.169113   57198 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 11:51:23.173151   57198 out.go:177] 
	W0812 11:51:23.174712   57198 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0812 11:51:23.174762   57198 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0812 11:51:23.174782   57198 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0812 11:51:23.176508   57198 out.go:177] 
	
	
	==> CRI-O <==
	Aug 12 11:51:24 old-k8s-version-835962 crio[649]: time="2024-08-12 11:51:24.177501938Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723463484177475741,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2dcff583-f61e-4ce4-b71c-331c42550b87 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:51:24 old-k8s-version-835962 crio[649]: time="2024-08-12 11:51:24.177971249Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b4daeb7b-390e-4b82-a16f-ba96f1cab04b name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:51:24 old-k8s-version-835962 crio[649]: time="2024-08-12 11:51:24.178027062Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b4daeb7b-390e-4b82-a16f-ba96f1cab04b name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:51:24 old-k8s-version-835962 crio[649]: time="2024-08-12 11:51:24.178056407Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b4daeb7b-390e-4b82-a16f-ba96f1cab04b name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:51:24 old-k8s-version-835962 crio[649]: time="2024-08-12 11:51:24.212156707Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd414673-19e1-43ad-bcf5-201868c2d91d name=/runtime.v1.RuntimeService/Version
	Aug 12 11:51:24 old-k8s-version-835962 crio[649]: time="2024-08-12 11:51:24.212276115Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd414673-19e1-43ad-bcf5-201868c2d91d name=/runtime.v1.RuntimeService/Version
	Aug 12 11:51:24 old-k8s-version-835962 crio[649]: time="2024-08-12 11:51:24.213354883Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0795d872-9050-4e20-867d-2910f7d8eb5e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:51:24 old-k8s-version-835962 crio[649]: time="2024-08-12 11:51:24.213832130Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723463484213806987,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0795d872-9050-4e20-867d-2910f7d8eb5e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:51:24 old-k8s-version-835962 crio[649]: time="2024-08-12 11:51:24.214320702Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ed9329f-1c71-4491-937f-64c201b27610 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:51:24 old-k8s-version-835962 crio[649]: time="2024-08-12 11:51:24.214372421Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ed9329f-1c71-4491-937f-64c201b27610 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:51:24 old-k8s-version-835962 crio[649]: time="2024-08-12 11:51:24.214404327Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7ed9329f-1c71-4491-937f-64c201b27610 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:51:24 old-k8s-version-835962 crio[649]: time="2024-08-12 11:51:24.246656685Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7740c631-6189-4dce-9bf7-030367157c41 name=/runtime.v1.RuntimeService/Version
	Aug 12 11:51:24 old-k8s-version-835962 crio[649]: time="2024-08-12 11:51:24.246764834Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7740c631-6189-4dce-9bf7-030367157c41 name=/runtime.v1.RuntimeService/Version
	Aug 12 11:51:24 old-k8s-version-835962 crio[649]: time="2024-08-12 11:51:24.248074376Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d64a404-6a49-439c-828d-03075a4c226b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:51:24 old-k8s-version-835962 crio[649]: time="2024-08-12 11:51:24.248539546Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723463484248514956,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d64a404-6a49-439c-828d-03075a4c226b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:51:24 old-k8s-version-835962 crio[649]: time="2024-08-12 11:51:24.249227708Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=48fa2b97-e939-4216-a3e4-33930dc86e48 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:51:24 old-k8s-version-835962 crio[649]: time="2024-08-12 11:51:24.249299121Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=48fa2b97-e939-4216-a3e4-33930dc86e48 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:51:24 old-k8s-version-835962 crio[649]: time="2024-08-12 11:51:24.249346527Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=48fa2b97-e939-4216-a3e4-33930dc86e48 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:51:24 old-k8s-version-835962 crio[649]: time="2024-08-12 11:51:24.280845495Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2998b8c1-3797-4145-817c-3f3d09f67509 name=/runtime.v1.RuntimeService/Version
	Aug 12 11:51:24 old-k8s-version-835962 crio[649]: time="2024-08-12 11:51:24.280917028Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2998b8c1-3797-4145-817c-3f3d09f67509 name=/runtime.v1.RuntimeService/Version
	Aug 12 11:51:24 old-k8s-version-835962 crio[649]: time="2024-08-12 11:51:24.282606057Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4691aea0-d9ec-4e8f-96d3-546efa3eee44 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:51:24 old-k8s-version-835962 crio[649]: time="2024-08-12 11:51:24.282980347Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723463484282956740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4691aea0-d9ec-4e8f-96d3-546efa3eee44 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:51:24 old-k8s-version-835962 crio[649]: time="2024-08-12 11:51:24.283451605Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ac762cb-be8f-45cd-8067-ab78aa8df686 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:51:24 old-k8s-version-835962 crio[649]: time="2024-08-12 11:51:24.283503810Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3ac762cb-be8f-45cd-8067-ab78aa8df686 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:51:24 old-k8s-version-835962 crio[649]: time="2024-08-12 11:51:24.283537896Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3ac762cb-be8f-45cd-8067-ab78aa8df686 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug12 11:43] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051227] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037827] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.743835] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.017925] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.558019] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.216104] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.055590] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052853] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.197707] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.118940] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.224588] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +6.260019] systemd-fstab-generator[897]: Ignoring "noauto" option for root device
	[  +0.065050] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.865114] systemd-fstab-generator[1022]: Ignoring "noauto" option for root device
	[ +14.292569] kauditd_printk_skb: 46 callbacks suppressed
	[Aug12 11:47] systemd-fstab-generator[5053]: Ignoring "noauto" option for root device
	[Aug12 11:49] systemd-fstab-generator[5340]: Ignoring "noauto" option for root device
	[  +0.063898] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 11:51:24 up 8 min,  0 users,  load average: 0.01, 0.08, 0.06
	Linux old-k8s-version-835962 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 12 11:51:22 old-k8s-version-835962 kubelet[5521]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Aug 12 11:51:22 old-k8s-version-835962 kubelet[5521]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000cec540, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000c21d40, 0x24, 0x0, ...)
	Aug 12 11:51:22 old-k8s-version-835962 kubelet[5521]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Aug 12 11:51:22 old-k8s-version-835962 kubelet[5521]: net.(*Dialer).DialContext(0xc000b3b7a0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000c21d40, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 12 11:51:22 old-k8s-version-835962 kubelet[5521]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Aug 12 11:51:22 old-k8s-version-835962 kubelet[5521]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000b48080, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000c21d40, 0x24, 0x60, 0x7f16dbd34430, 0x118, ...)
	Aug 12 11:51:22 old-k8s-version-835962 kubelet[5521]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Aug 12 11:51:22 old-k8s-version-835962 kubelet[5521]: net/http.(*Transport).dial(0xc000626000, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000c21d40, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 12 11:51:22 old-k8s-version-835962 kubelet[5521]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Aug 12 11:51:22 old-k8s-version-835962 kubelet[5521]: net/http.(*Transport).dialConn(0xc000626000, 0x4f7fe00, 0xc000052030, 0x0, 0xc000a3a3c0, 0x5, 0xc000c21d40, 0x24, 0x0, 0xc000c80b40, ...)
	Aug 12 11:51:22 old-k8s-version-835962 kubelet[5521]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Aug 12 11:51:22 old-k8s-version-835962 kubelet[5521]: net/http.(*Transport).dialConnFor(0xc000626000, 0xc000c104d0)
	Aug 12 11:51:22 old-k8s-version-835962 kubelet[5521]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Aug 12 11:51:22 old-k8s-version-835962 kubelet[5521]: created by net/http.(*Transport).queueForDial
	Aug 12 11:51:22 old-k8s-version-835962 kubelet[5521]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Aug 12 11:51:22 old-k8s-version-835962 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 12 11:51:22 old-k8s-version-835962 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 12 11:51:23 old-k8s-version-835962 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Aug 12 11:51:23 old-k8s-version-835962 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 12 11:51:23 old-k8s-version-835962 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 12 11:51:23 old-k8s-version-835962 kubelet[5577]: I0812 11:51:23.128213    5577 server.go:416] Version: v1.20.0
	Aug 12 11:51:23 old-k8s-version-835962 kubelet[5577]: I0812 11:51:23.128596    5577 server.go:837] Client rotation is on, will bootstrap in background
	Aug 12 11:51:23 old-k8s-version-835962 kubelet[5577]: I0812 11:51:23.130729    5577 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 12 11:51:23 old-k8s-version-835962 kubelet[5577]: I0812 11:51:23.132035    5577 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 12 11:51:23 old-k8s-version-835962 kubelet[5577]: W0812 11:51:23.132045    5577 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-835962 -n old-k8s-version-835962
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-835962 -n old-k8s-version-835962: exit status 2 (228.835766ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-835962" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (740.58s)
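
The failure above follows the path kubeadm itself points at: the kubelet never became healthy, so the wait-control-plane phase timed out, the systemd journal shows the kubelet unit crash-looping (restart counter at 20) and ending with "Cannot detect current cgroup on cgroup v2". A minimal shell sketch of the triage the output suggests is below; the profile name, CRI-O socket path and the --extra-config value are taken verbatim from the log, but the exact sequence (and whether the cgroup-driver override alone resolves it) is an assumption, not a verified fix.

	# Why does the kubelet keep exiting? (the journal above shows restart counter at 20)
	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 100

	# Did any control-plane container start and then crash? (socket path from the log above)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# Retry the start with the override minikube itself suggests; in a real re-run the original
	# driver/runtime/kubernetes-version flags would be passed as well.
	out/minikube-linux-amd64 start -p old-k8s-version-835962 --extra-config=kubelet.cgroup-driver=systemd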

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-993542 -n no-preload-993542
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-993542 -n no-preload-993542: exit status 3 (3.167906409s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0812 11:39:16.893263   57505 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.148:22: connect: no route to host
	E0812 11:39:16.893286   57505 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.148:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-993542 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-993542 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152816846s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.148:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-993542 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-993542 -n no-preload-993542
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-993542 -n no-preload-993542: exit status 3 (3.062816602s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0812 11:39:26.109285   57586 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.148:22: connect: no route to host
	E0812 11:39:26.109337   57586 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.148:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-993542" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
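
This EnableAddonAfterStop failure is downstream of a stop that never completed: `status` reports "Error" rather than "Stopped" because every SSH dial to 192.168.61.148:22 returns "no route to host", and `addons enable` then fails the same way while checking for paused containers. A short sketch of how one might confirm what state the guest is really in before retrying (the libvirt domain name is assumed to match the profile name, as it does for the other profiles in this log, and virsh access on the CI host is likewise an assumption):

	# The host state as the test sees it ("Error" here, where "Stopped" was expected).
	out/minikube-linux-amd64 status --format='{{.Host}}' -p no-preload-993542

	# Is the guest reachable at all? The address comes from the errors above.
	ping -c 1 -W 2 192.168.61.148

	# Check the hypervisor's view directly.
	virsh list --all
	virsh domstate no-preload-993542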

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-581883 --alsologtostderr -v=3
E0812 11:45:45.936349   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-581883 --alsologtostderr -v=3: exit status 82 (2m0.584642644s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-581883"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 11:44:27.537495   59270 out.go:291] Setting OutFile to fd 1 ...
	I0812 11:44:27.537643   59270 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:44:27.537653   59270 out.go:304] Setting ErrFile to fd 2...
	I0812 11:44:27.537660   59270 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:44:27.537890   59270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 11:44:27.538204   59270 out.go:298] Setting JSON to false
	I0812 11:44:27.538461   59270 mustload.go:65] Loading cluster: default-k8s-diff-port-581883
	I0812 11:44:27.538934   59270 config.go:182] Loaded profile config "default-k8s-diff-port-581883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 11:44:27.539029   59270 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/config.json ...
	I0812 11:44:27.539204   59270 mustload.go:65] Loading cluster: default-k8s-diff-port-581883
	I0812 11:44:27.539327   59270 config.go:182] Loaded profile config "default-k8s-diff-port-581883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 11:44:27.539361   59270 stop.go:39] StopHost: default-k8s-diff-port-581883
	I0812 11:44:27.539845   59270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:44:27.539929   59270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:44:27.557363   59270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41057
	I0812 11:44:27.557870   59270 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:44:27.558478   59270 main.go:141] libmachine: Using API Version  1
	I0812 11:44:27.558493   59270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:44:27.558916   59270 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:44:27.560952   59270 out.go:177] * Stopping node "default-k8s-diff-port-581883"  ...
	I0812 11:44:27.562583   59270 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0812 11:44:27.562614   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:44:27.562974   59270 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0812 11:44:27.563010   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:44:27.566070   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:44:27.566466   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:43:32 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:44:27.566493   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:44:27.566729   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:44:27.566966   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:44:27.567169   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:44:27.567436   59270 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa Username:docker}
	I0812 11:44:27.693313   59270 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0812 11:44:27.749055   59270 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0812 11:44:27.846924   59270 main.go:141] libmachine: Stopping "default-k8s-diff-port-581883"...
	I0812 11:44:27.846962   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetState
	I0812 11:44:27.849155   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Stop
	I0812 11:44:27.853867   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 0/120
	I0812 11:44:28.855875   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 1/120
	I0812 11:44:29.857471   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 2/120
	I0812 11:44:30.859346   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 3/120
	I0812 11:44:31.861002   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 4/120
	I0812 11:44:32.863368   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 5/120
	I0812 11:44:33.864880   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 6/120
	I0812 11:44:34.866605   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 7/120
	I0812 11:44:35.868011   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 8/120
	I0812 11:44:36.869650   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 9/120
	I0812 11:44:37.871684   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 10/120
	I0812 11:44:38.873207   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 11/120
	I0812 11:44:39.874565   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 12/120
	I0812 11:44:40.876998   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 13/120
	I0812 11:44:41.878596   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 14/120
	I0812 11:44:42.880832   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 15/120
	I0812 11:44:43.882641   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 16/120
	I0812 11:44:44.884144   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 17/120
	I0812 11:44:45.885615   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 18/120
	I0812 11:44:46.887277   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 19/120
	I0812 11:44:47.889092   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 20/120
	I0812 11:44:48.891844   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 21/120
	I0812 11:44:49.894339   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 22/120
	I0812 11:44:50.895781   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 23/120
	I0812 11:44:51.897535   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 24/120
	I0812 11:44:52.899459   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 25/120
	I0812 11:44:53.901802   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 26/120
	I0812 11:44:54.903418   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 27/120
	I0812 11:44:55.905497   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 28/120
	I0812 11:44:56.907050   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 29/120
	I0812 11:44:57.909697   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 30/120
	I0812 11:44:58.911513   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 31/120
	I0812 11:44:59.912795   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 32/120
	I0812 11:45:00.914187   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 33/120
	I0812 11:45:01.915956   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 34/120
	I0812 11:45:02.917705   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 35/120
	I0812 11:45:03.919069   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 36/120
	I0812 11:45:04.920400   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 37/120
	I0812 11:45:05.922980   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 38/120
	I0812 11:45:06.924132   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 39/120
	I0812 11:45:07.926518   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 40/120
	I0812 11:45:08.927867   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 41/120
	I0812 11:45:09.929301   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 42/120
	I0812 11:45:10.931364   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 43/120
	I0812 11:45:11.932776   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 44/120
	I0812 11:45:12.934079   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 45/120
	I0812 11:45:13.935579   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 46/120
	I0812 11:45:14.937078   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 47/120
	I0812 11:45:15.939717   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 48/120
	I0812 11:45:16.941381   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 49/120
	I0812 11:45:17.943360   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 50/120
	I0812 11:45:18.945568   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 51/120
	I0812 11:45:19.947463   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 52/120
	I0812 11:45:20.949117   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 53/120
	I0812 11:45:21.951356   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 54/120
	I0812 11:45:22.953518   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 55/120
	I0812 11:45:23.955816   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 56/120
	I0812 11:45:24.957128   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 57/120
	I0812 11:45:25.958394   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 58/120
	I0812 11:45:26.960128   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 59/120
	I0812 11:45:27.962233   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 60/120
	I0812 11:45:28.963570   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 61/120
	I0812 11:45:29.965245   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 62/120
	I0812 11:45:30.966731   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 63/120
	I0812 11:45:31.968262   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 64/120
	I0812 11:45:32.970170   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 65/120
	I0812 11:45:33.971589   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 66/120
	I0812 11:45:34.973090   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 67/120
	I0812 11:45:35.975369   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 68/120
	I0812 11:45:36.976852   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 69/120
	I0812 11:45:37.978991   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 70/120
	I0812 11:45:38.980776   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 71/120
	I0812 11:45:39.982296   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 72/120
	I0812 11:45:40.983897   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 73/120
	I0812 11:45:41.985503   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 74/120
	I0812 11:45:42.987548   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 75/120
	I0812 11:45:43.989301   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 76/120
	I0812 11:45:44.990868   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 77/120
	I0812 11:45:45.992362   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 78/120
	I0812 11:45:46.993693   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 79/120
	I0812 11:45:47.995917   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 80/120
	I0812 11:45:48.997370   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 81/120
	I0812 11:45:49.998688   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 82/120
	I0812 11:45:51.000210   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 83/120
	I0812 11:45:52.001648   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 84/120
	I0812 11:45:53.003842   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 85/120
	I0812 11:45:54.006522   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 86/120
	I0812 11:45:55.007891   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 87/120
	I0812 11:45:56.009351   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 88/120
	I0812 11:45:57.011323   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 89/120
	I0812 11:45:58.013224   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 90/120
	I0812 11:45:59.015537   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 91/120
	I0812 11:46:00.016976   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 92/120
	I0812 11:46:01.018321   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 93/120
	I0812 11:46:02.019854   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 94/120
	I0812 11:46:03.022046   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 95/120
	I0812 11:46:04.024380   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 96/120
	I0812 11:46:05.025613   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 97/120
	I0812 11:46:06.027382   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 98/120
	I0812 11:46:07.028656   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 99/120
	I0812 11:46:08.030713   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 100/120
	I0812 11:46:09.031953   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 101/120
	I0812 11:46:10.033463   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 102/120
	I0812 11:46:11.034869   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 103/120
	I0812 11:46:12.036207   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 104/120
	I0812 11:46:13.038239   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 105/120
	I0812 11:46:14.039738   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 106/120
	I0812 11:46:15.041283   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 107/120
	I0812 11:46:16.042634   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 108/120
	I0812 11:46:17.044113   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 109/120
	I0812 11:46:18.046439   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 110/120
	I0812 11:46:19.047926   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 111/120
	I0812 11:46:20.049383   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 112/120
	I0812 11:46:21.051040   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 113/120
	I0812 11:46:22.052465   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 114/120
	I0812 11:46:23.054726   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 115/120
	I0812 11:46:24.056256   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 116/120
	I0812 11:46:25.057896   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 117/120
	I0812 11:46:26.059400   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 118/120
	I0812 11:46:27.060817   59270 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for machine to stop 119/120
	I0812 11:46:28.062164   59270 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0812 11:46:28.062227   59270 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0812 11:46:28.064158   59270 out.go:177] 
	W0812 11:46:28.065555   59270 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0812 11:46:28.065572   59270 out.go:239] * 
	* 
	W0812 11:46:28.068438   59270 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 11:46:28.069848   59270 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-581883 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-581883 -n default-k8s-diff-port-581883
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-581883 -n default-k8s-diff-port-581883: exit status 3 (18.521211325s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0812 11:46:46.593237   59703 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.114:22: connect: no route to host
	E0812 11:46:46.593262   59703 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.114:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-581883" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.11s)
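
The root cause here is the stop itself: the kvm2 driver called .Stop and then polled the domain 120 times over two minutes without it ever leaving the "Running" state, so minikube exited with GUEST_STOP_TIMEOUT (status 82) and the later status checks see an unreachable but still-running VM. When a graceful ACPI shutdown stalls like this, one recovery path is to act on the libvirt domain directly; a sketch is below (the domain name default-k8s-diff-port-581883 matches the DHCP-lease lines earlier in the log, and `virsh destroy` is a hard power-off, so any unsynced guest state is lost):

	# Confirm the domain is still running after the 120-poll timeout.
	virsh domstate default-k8s-diff-port-581883

	# Try one more graceful ACPI shutdown...
	virsh shutdown default-k8s-diff-port-581883

	# ...and only force power-off if it still refuses to stop.
	virsh destroy default-k8s-diff-port-581883

	# minikube should then report the "Stopped" host state the test expects.
	out/minikube-linux-amd64 status --format='{{.Host}}' -p default-k8s-diff-port-581883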

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-581883 -n default-k8s-diff-port-581883
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-581883 -n default-k8s-diff-port-581883: exit status 3 (3.163732183s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0812 11:46:49.757234   59782 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.114:22: connect: no route to host
	E0812 11:46:49.757259   59782 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.114:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-581883 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-581883 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153336025s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.114:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-581883 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-581883 -n default-k8s-diff-port-581883
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-581883 -n default-k8s-diff-port-581883: exit status 3 (3.062507336s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0812 11:46:58.973259   59862 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.114:22: connect: no route to host
	E0812 11:46:58.973285   59862 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.114:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-581883" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.55s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-993542 -n no-preload-993542
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-12 11:58:29.985942734 +0000 UTC m=+5899.142865308
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-993542 -n no-preload-993542
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-993542 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-993542 logs -n 25: (1.34454494s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p pause-693259                                        | pause-693259                 | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-693259                                        | pause-693259                 | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-693259                                        | pause-693259                 | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	| start   | -p embed-certs-093615                                  | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:35 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-002803                              | cert-expiration-002803       | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	| delete  | -p                                                     | disable-driver-mounts-101845 | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	|         | disable-driver-mounts-101845                           |                              |         |         |                     |                     |
	| start   | -p no-preload-993542                                   | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:36 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-093615            | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 11:35 UTC | 12 Aug 24 11:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-093615                                  | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 11:35 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-993542             | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:36 UTC | 12 Aug 24 11:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-993542                                   | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-835962        | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 11:37 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-093615                 | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 11:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-093615                                  | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 11:38 UTC | 12 Aug 24 11:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-835962                              | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 11:38 UTC | 12 Aug 24 11:39 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-835962             | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC | 12 Aug 24 11:39 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-835962                              | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-535697                           | kubernetes-upgrade-535697    | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC | 12 Aug 24 11:39 UTC |
	| start   | -p                                                     | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC | 12 Aug 24 11:44 UTC |
	|         | default-k8s-diff-port-581883                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-993542                  | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-993542                                   | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC | 12 Aug 24 11:49 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-581883  | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:44 UTC | 12 Aug 24 11:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:44 UTC |                     |
	|         | default-k8s-diff-port-581883                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-581883       | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:46 UTC | 12 Aug 24 11:57 UTC |
	|         | default-k8s-diff-port-581883                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 11:46:59
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 11:46:59.013199   59908 out.go:291] Setting OutFile to fd 1 ...
	I0812 11:46:59.013476   59908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:46:59.013486   59908 out.go:304] Setting ErrFile to fd 2...
	I0812 11:46:59.013490   59908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:46:59.013689   59908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 11:46:59.014204   59908 out.go:298] Setting JSON to false
	I0812 11:46:59.015302   59908 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5360,"bootTime":1723457859,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 11:46:59.015368   59908 start.go:139] virtualization: kvm guest
	I0812 11:46:59.017512   59908 out.go:177] * [default-k8s-diff-port-581883] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 11:46:59.018833   59908 notify.go:220] Checking for updates...
	I0812 11:46:59.018859   59908 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 11:46:59.020251   59908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 11:46:59.021646   59908 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 11:46:59.022806   59908 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 11:46:59.024110   59908 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 11:46:59.025476   59908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 11:46:59.027290   59908 config.go:182] Loaded profile config "default-k8s-diff-port-581883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 11:46:59.027911   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:46:59.028000   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:46:59.042960   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39549
	I0812 11:46:59.043506   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:46:59.044010   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:46:59.044038   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:46:59.044357   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:46:59.044528   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:46:59.044791   59908 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 11:46:59.045201   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:46:59.045244   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:46:59.060824   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35189
	I0812 11:46:59.061268   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:46:59.061747   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:46:59.061775   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:46:59.062156   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:46:59.062346   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:46:59.101403   59908 out.go:177] * Using the kvm2 driver based on existing profile
	I0812 11:46:59.102677   59908 start.go:297] selected driver: kvm2
	I0812 11:46:59.102698   59908 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-581883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-581883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:46:59.102863   59908 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 11:46:59.103621   59908 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 11:46:59.103690   59908 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19409-3774/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 11:46:59.119409   59908 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 11:46:59.119785   59908 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 11:46:59.119848   59908 cni.go:84] Creating CNI manager for ""
	I0812 11:46:59.119862   59908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:46:59.119900   59908 start.go:340] cluster config:
	{Name:default-k8s-diff-port-581883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-581883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:46:59.120006   59908 iso.go:125] acquiring lock: {Name:mk817f4bb6a5031e68978029203802957205757f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 11:46:59.121814   59908 out.go:177] * Starting "default-k8s-diff-port-581883" primary control-plane node in "default-k8s-diff-port-581883" cluster
	I0812 11:46:59.123067   59908 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 11:46:59.123111   59908 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0812 11:46:59.123124   59908 cache.go:56] Caching tarball of preloaded images
	I0812 11:46:59.123213   59908 preload.go:172] Found /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 11:46:59.123228   59908 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 11:46:59.123315   59908 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/config.json ...
	I0812 11:46:59.123508   59908 start.go:360] acquireMachinesLock for default-k8s-diff-port-581883: {Name:mkf1f3ff7da562a087a1e344d13e67c3c8140973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 11:46:59.123549   59908 start.go:364] duration metric: took 23.58µs to acquireMachinesLock for "default-k8s-diff-port-581883"
	I0812 11:46:59.123562   59908 start.go:96] Skipping create...Using existing machine configuration
	I0812 11:46:59.123569   59908 fix.go:54] fixHost starting: 
	I0812 11:46:59.123822   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:46:59.123852   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:46:59.138741   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44075
	I0812 11:46:59.139136   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:46:59.139611   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:46:59.139638   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:46:59.139938   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:46:59.140109   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:46:59.140220   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetState
	I0812 11:46:59.141738   59908 fix.go:112] recreateIfNeeded on default-k8s-diff-port-581883: state=Running err=<nil>
	W0812 11:46:59.141754   59908 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 11:46:59.143728   59908 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-581883" VM ...
	I0812 11:46:54.633587   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:46:54.653858   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:46:54.653945   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:46:54.693961   57198 cri.go:89] found id: ""
	I0812 11:46:54.693985   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.693992   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:46:54.693997   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:46:54.694045   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:46:54.728922   57198 cri.go:89] found id: ""
	I0812 11:46:54.728951   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.728963   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:46:54.728970   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:46:54.729034   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:46:54.764203   57198 cri.go:89] found id: ""
	I0812 11:46:54.764235   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.764246   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:46:54.764253   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:46:54.764316   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:46:54.805321   57198 cri.go:89] found id: ""
	I0812 11:46:54.805352   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.805363   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:46:54.805370   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:46:54.805437   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:46:54.844243   57198 cri.go:89] found id: ""
	I0812 11:46:54.844273   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.844281   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:46:54.844287   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:46:54.844345   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:46:54.883145   57198 cri.go:89] found id: ""
	I0812 11:46:54.883181   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.883192   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:46:54.883200   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:46:54.883263   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:46:54.921119   57198 cri.go:89] found id: ""
	I0812 11:46:54.921150   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.921160   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:46:54.921168   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:46:54.921230   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:46:54.955911   57198 cri.go:89] found id: ""
	I0812 11:46:54.955941   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.955949   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:46:54.955958   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:46:54.955969   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:46:55.006069   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:46:55.006108   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:46:55.020600   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:46:55.020637   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:46:55.094897   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:46:55.094917   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:46:55.094932   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:46:55.173601   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:46:55.173642   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:46:57.711917   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:46:57.726261   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:46:57.726340   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:46:57.762810   57198 cri.go:89] found id: ""
	I0812 11:46:57.762834   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.762845   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:46:57.762853   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:46:57.762919   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:46:57.796596   57198 cri.go:89] found id: ""
	I0812 11:46:57.796638   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.796649   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:46:57.796657   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:46:57.796719   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:46:57.829568   57198 cri.go:89] found id: ""
	I0812 11:46:57.829600   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.829607   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:46:57.829612   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:46:57.829659   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:46:57.861229   57198 cri.go:89] found id: ""
	I0812 11:46:57.861260   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.861271   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:46:57.861278   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:46:57.861339   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:46:57.892274   57198 cri.go:89] found id: ""
	I0812 11:46:57.892302   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.892312   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:46:57.892320   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:46:57.892384   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:46:57.924635   57198 cri.go:89] found id: ""
	I0812 11:46:57.924662   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.924670   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:46:57.924675   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:46:57.924723   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:46:57.961539   57198 cri.go:89] found id: ""
	I0812 11:46:57.961584   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.961592   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:46:57.961598   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:46:57.961656   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:46:57.994115   57198 cri.go:89] found id: ""
	I0812 11:46:57.994148   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.994160   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:46:57.994170   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:46:57.994182   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:46:58.067608   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:46:58.067648   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:46:58.105003   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:46:58.105036   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:46:58.156152   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:46:58.156186   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:46:58.169380   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:46:58.169409   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:46:58.236991   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:46:56.296673   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:46:58.297248   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:00.796584   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:00.182029   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:02.182240   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:46:59.144895   59908 machine.go:94] provisionDockerMachine start ...
	I0812 11:46:59.144926   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:46:59.145161   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:46:59.147827   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:46:59.148278   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:43:32 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:46:59.148305   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:46:59.148451   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:46:59.148645   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:46:59.148820   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:46:59.148953   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:46:59.149111   59908 main.go:141] libmachine: Using SSH client type: native
	I0812 11:46:59.149331   59908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0812 11:46:59.149345   59908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0812 11:47:02.045251   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:00.737522   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:00.750916   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:00.750991   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:00.782713   57198 cri.go:89] found id: ""
	I0812 11:47:00.782734   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.782742   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:00.782747   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:00.782793   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:00.816552   57198 cri.go:89] found id: ""
	I0812 11:47:00.816576   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.816584   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:00.816590   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:00.816639   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:00.850761   57198 cri.go:89] found id: ""
	I0812 11:47:00.850784   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.850794   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:00.850801   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:00.850864   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:00.888099   57198 cri.go:89] found id: ""
	I0812 11:47:00.888138   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.888146   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:00.888152   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:00.888210   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:00.926073   57198 cri.go:89] found id: ""
	I0812 11:47:00.926103   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.926113   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:00.926120   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:00.926187   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:00.963404   57198 cri.go:89] found id: ""
	I0812 11:47:00.963434   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.963442   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:00.963447   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:00.963508   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:00.998331   57198 cri.go:89] found id: ""
	I0812 11:47:00.998366   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.998376   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:00.998385   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:00.998448   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:01.042696   57198 cri.go:89] found id: ""
	I0812 11:47:01.042729   57198 logs.go:276] 0 containers: []
	W0812 11:47:01.042738   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:01.042750   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:01.042764   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:01.134880   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:01.134918   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:01.171185   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:01.171223   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:01.222565   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:01.222608   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:01.236042   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:01.236076   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:01.309342   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:03.810121   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:03.822945   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:03.823023   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:03.856316   57198 cri.go:89] found id: ""
	I0812 11:47:03.856342   57198 logs.go:276] 0 containers: []
	W0812 11:47:03.856353   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:03.856361   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:03.856428   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:03.894579   57198 cri.go:89] found id: ""
	I0812 11:47:03.894610   57198 logs.go:276] 0 containers: []
	W0812 11:47:03.894622   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:03.894630   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:03.894680   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:03.929306   57198 cri.go:89] found id: ""
	I0812 11:47:03.929334   57198 logs.go:276] 0 containers: []
	W0812 11:47:03.929352   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:03.929359   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:03.929419   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:03.970739   57198 cri.go:89] found id: ""
	I0812 11:47:03.970774   57198 logs.go:276] 0 containers: []
	W0812 11:47:03.970786   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:03.970794   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:03.970872   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:04.004583   57198 cri.go:89] found id: ""
	I0812 11:47:04.004610   57198 logs.go:276] 0 containers: []
	W0812 11:47:04.004619   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:04.004630   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:04.004681   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:04.039259   57198 cri.go:89] found id: ""
	I0812 11:47:04.039288   57198 logs.go:276] 0 containers: []
	W0812 11:47:04.039298   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:04.039304   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:04.039372   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:04.072490   57198 cri.go:89] found id: ""
	I0812 11:47:04.072522   57198 logs.go:276] 0 containers: []
	W0812 11:47:04.072532   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:04.072547   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:04.072602   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:04.105648   57198 cri.go:89] found id: ""
	I0812 11:47:04.105677   57198 logs.go:276] 0 containers: []
	W0812 11:47:04.105686   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:04.105694   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:04.105705   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:04.181854   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:04.181880   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:04.181894   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:04.258499   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:04.258541   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:03.294934   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:05.295154   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:04.183393   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:06.682752   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:05.121108   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:04.296893   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:04.296918   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:04.347475   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:04.347514   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:06.862382   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:06.876230   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:06.876314   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:06.919447   57198 cri.go:89] found id: ""
	I0812 11:47:06.919487   57198 logs.go:276] 0 containers: []
	W0812 11:47:06.919499   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:06.919508   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:06.919581   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:06.954000   57198 cri.go:89] found id: ""
	I0812 11:47:06.954035   57198 logs.go:276] 0 containers: []
	W0812 11:47:06.954046   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:06.954055   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:06.954124   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:06.988225   57198 cri.go:89] found id: ""
	I0812 11:47:06.988256   57198 logs.go:276] 0 containers: []
	W0812 11:47:06.988266   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:06.988274   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:06.988347   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:07.024425   57198 cri.go:89] found id: ""
	I0812 11:47:07.024452   57198 logs.go:276] 0 containers: []
	W0812 11:47:07.024464   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:07.024471   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:07.024536   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:07.059758   57198 cri.go:89] found id: ""
	I0812 11:47:07.059785   57198 logs.go:276] 0 containers: []
	W0812 11:47:07.059793   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:07.059800   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:07.059859   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:07.093540   57198 cri.go:89] found id: ""
	I0812 11:47:07.093570   57198 logs.go:276] 0 containers: []
	W0812 11:47:07.093580   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:07.093587   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:07.093649   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:07.126880   57198 cri.go:89] found id: ""
	I0812 11:47:07.126910   57198 logs.go:276] 0 containers: []
	W0812 11:47:07.126920   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:07.126929   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:07.126984   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:07.159930   57198 cri.go:89] found id: ""
	I0812 11:47:07.159959   57198 logs.go:276] 0 containers: []
	W0812 11:47:07.159970   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:07.159980   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:07.159995   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:07.214022   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:07.214063   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:07.227009   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:07.227037   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:07.297583   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:07.297609   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:07.297629   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:07.377229   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:07.377281   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:07.296302   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:09.296695   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:09.182760   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:11.682727   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:11.197110   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:09.914683   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:09.927943   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:09.928014   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:09.961729   57198 cri.go:89] found id: ""
	I0812 11:47:09.961757   57198 logs.go:276] 0 containers: []
	W0812 11:47:09.961768   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:09.961775   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:09.961835   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:09.998895   57198 cri.go:89] found id: ""
	I0812 11:47:09.998923   57198 logs.go:276] 0 containers: []
	W0812 11:47:09.998931   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:09.998936   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:09.998989   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:10.036414   57198 cri.go:89] found id: ""
	I0812 11:47:10.036447   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.036457   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:10.036465   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:10.036519   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:10.073783   57198 cri.go:89] found id: ""
	I0812 11:47:10.073811   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.073818   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:10.073824   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:10.073872   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:10.110532   57198 cri.go:89] found id: ""
	I0812 11:47:10.110566   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.110577   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:10.110584   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:10.110643   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:10.143728   57198 cri.go:89] found id: ""
	I0812 11:47:10.143768   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.143782   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:10.143791   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:10.143875   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:10.176706   57198 cri.go:89] found id: ""
	I0812 11:47:10.176740   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.176749   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:10.176754   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:10.176803   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:10.210409   57198 cri.go:89] found id: ""
	I0812 11:47:10.210439   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.210449   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:10.210460   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:10.210474   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:10.261338   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:10.261378   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:10.274313   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:10.274346   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:10.341830   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:10.341865   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:10.341881   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:10.417654   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:10.417699   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:12.954982   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:12.967755   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:12.967841   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:13.001425   57198 cri.go:89] found id: ""
	I0812 11:47:13.001452   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.001462   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:13.001470   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:13.001528   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:13.036527   57198 cri.go:89] found id: ""
	I0812 11:47:13.036561   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.036572   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:13.036579   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:13.036640   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:13.073271   57198 cri.go:89] found id: ""
	I0812 11:47:13.073301   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.073314   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:13.073323   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:13.073380   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:13.107512   57198 cri.go:89] found id: ""
	I0812 11:47:13.107543   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.107551   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:13.107557   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:13.107614   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:13.141938   57198 cri.go:89] found id: ""
	I0812 11:47:13.141972   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.141984   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:13.141991   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:13.142051   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:13.176628   57198 cri.go:89] found id: ""
	I0812 11:47:13.176660   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.176672   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:13.176679   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:13.176739   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:13.211620   57198 cri.go:89] found id: ""
	I0812 11:47:13.211649   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.211660   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:13.211667   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:13.211732   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:13.243877   57198 cri.go:89] found id: ""
	I0812 11:47:13.243902   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.243909   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:13.243917   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:13.243928   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:13.297684   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:13.297718   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:13.311287   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:13.311318   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:13.376488   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:13.376516   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:13.376531   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:13.457745   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:13.457786   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
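The repeated blocks above are minikube's diagnostic loop while the apiserver on this node is down: it probes for each control-plane container (`pgrep`, then `sudo crictl ps -a --quiet --name=<component>`), finds none, and re-gathers kubelet, dmesg, describe-nodes, CRI-O and container-status output before retrying. The Go sketch below only illustrates that probe-and-collect pattern; runSSH and probeAndCollect are hypothetical stand-ins (running the commands locally via bash), not minikube's actual ssh_runner API.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runSSH is a local stand-in for minikube's ssh_runner: it runs the command
// via bash and returns its combined output, so the sketch is self-contained.
func runSSH(cmd string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

// probeAndCollect mirrors the pattern in the log: look for control-plane
// containers, and when none exist, gather node-level diagnostics.
func probeAndCollect() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	found := false
	for _, c := range components {
		out, _ := runSSH("sudo crictl ps -a --quiet --name=" + c)
		if strings.TrimSpace(out) == "" {
			fmt.Printf("no container was found matching %q\n", c)
			continue
		}
		found = true
	}
	if !found {
		// Corresponds to the "Gathering logs for ..." lines above.
		for _, cmd := range []string{
			"sudo journalctl -u kubelet -n 400",
			"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"sudo journalctl -u crio -n 400",
			"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		} {
			if out, err := runSSH(cmd); err == nil {
				_ = out // the collected diagnostics would be aggregated here
			}
		}
	}
}

func main() { probeAndCollect() }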
	I0812 11:47:11.795381   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:13.795932   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:14.183038   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:16.183071   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:14.273141   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:15.993556   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:16.006169   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:16.006249   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:16.040541   57198 cri.go:89] found id: ""
	I0812 11:47:16.040569   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.040578   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:16.040583   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:16.040633   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:16.073886   57198 cri.go:89] found id: ""
	I0812 11:47:16.073913   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.073924   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:16.073931   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:16.073993   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:16.107299   57198 cri.go:89] found id: ""
	I0812 11:47:16.107356   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.107369   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:16.107376   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:16.107431   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:16.142168   57198 cri.go:89] found id: ""
	I0812 11:47:16.142200   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.142209   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:16.142215   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:16.142262   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:16.175398   57198 cri.go:89] found id: ""
	I0812 11:47:16.175429   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.175440   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:16.175447   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:16.175509   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:16.210518   57198 cri.go:89] found id: ""
	I0812 11:47:16.210543   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.210551   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:16.210558   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:16.210614   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:16.244570   57198 cri.go:89] found id: ""
	I0812 11:47:16.244602   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.244611   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:16.244617   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:16.244683   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:16.278722   57198 cri.go:89] found id: ""
	I0812 11:47:16.278748   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.278756   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:16.278765   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:16.278777   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:16.322973   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:16.323010   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:16.374888   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:16.374936   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:16.388797   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:16.388827   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:16.462710   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:16.462731   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:16.462742   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:19.046529   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:19.061016   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:19.061083   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:19.098199   57198 cri.go:89] found id: ""
	I0812 11:47:19.098226   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.098238   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:19.098246   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:19.098307   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:19.131177   57198 cri.go:89] found id: ""
	I0812 11:47:19.131207   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.131215   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:19.131222   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:19.131281   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:19.164497   57198 cri.go:89] found id: ""
	I0812 11:47:19.164528   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.164539   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:19.164546   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:19.164619   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:19.200447   57198 cri.go:89] found id: ""
	I0812 11:47:19.200477   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.200485   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:19.200490   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:19.200553   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:19.235004   57198 cri.go:89] found id: ""
	I0812 11:47:19.235039   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.235051   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:19.235058   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:19.235114   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:16.297007   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:18.796402   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:18.186341   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:20.682850   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:22.683087   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:20.349117   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:23.421182   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
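The interleaved 59908 lines record libmachine repeatedly failing to reach 192.168.50.114:22 ("no route to host") while that VM's network is unavailable. A minimal, self-contained Go sketch of this kind of dial-with-retry loop follows; the attempt count and wait interval are assumptions for illustration, not libmachine's actual values.

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry keeps attempting a TCP connection, logging each failure in
// the spirit of the libmachine lines above, and gives up after maxAttempts.
func dialWithRetry(addr string, maxAttempts int, wait time.Duration) (net.Conn, error) {
	var lastErr error
	for i := 0; i < maxAttempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, wait)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		fmt.Printf("Error dialing TCP: %v\n", err)
		time.Sleep(wait)
	}
	return nil, fmt.Errorf("giving up after %d attempts: %w", maxAttempts, lastErr)
}

func main() {
	if conn, err := dialWithRetry("192.168.50.114:22", 5, 3*time.Second); err == nil {
		conn.Close()
	}
}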
	I0812 11:47:19.269669   57198 cri.go:89] found id: ""
	I0812 11:47:19.269700   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.269711   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:19.269719   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:19.269786   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:19.305486   57198 cri.go:89] found id: ""
	I0812 11:47:19.305515   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.305527   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:19.305533   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:19.305610   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:19.340701   57198 cri.go:89] found id: ""
	I0812 11:47:19.340730   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.340737   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:19.340745   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:19.340757   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:19.391595   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:19.391637   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:19.405702   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:19.405730   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:19.476972   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:19.477002   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:19.477017   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:19.560001   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:19.560037   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:22.100167   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:22.112650   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:22.112712   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:22.145625   57198 cri.go:89] found id: ""
	I0812 11:47:22.145651   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.145659   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:22.145665   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:22.145722   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:22.181353   57198 cri.go:89] found id: ""
	I0812 11:47:22.181388   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.181400   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:22.181407   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:22.181465   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:22.213563   57198 cri.go:89] found id: ""
	I0812 11:47:22.213592   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.213603   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:22.213610   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:22.213669   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:22.247589   57198 cri.go:89] found id: ""
	I0812 11:47:22.247614   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.247629   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:22.247635   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:22.247682   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:22.279102   57198 cri.go:89] found id: ""
	I0812 11:47:22.279126   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.279134   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:22.279139   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:22.279187   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:22.316174   57198 cri.go:89] found id: ""
	I0812 11:47:22.316204   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.316215   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:22.316222   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:22.316289   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:22.351875   57198 cri.go:89] found id: ""
	I0812 11:47:22.351904   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.351915   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:22.351920   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:22.351976   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:22.384224   57198 cri.go:89] found id: ""
	I0812 11:47:22.384260   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.384273   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:22.384283   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:22.384297   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:22.423032   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:22.423058   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:22.474127   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:22.474165   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:22.487638   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:22.487672   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:22.556554   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:22.556590   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:22.556607   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:21.295000   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:23.295712   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:25.296884   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:25.183687   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:27.683615   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:25.138357   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:25.152354   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:25.152438   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:25.187059   57198 cri.go:89] found id: ""
	I0812 11:47:25.187085   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.187095   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:25.187104   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:25.187164   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:25.220817   57198 cri.go:89] found id: ""
	I0812 11:47:25.220840   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.220848   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:25.220853   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:25.220911   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:25.256308   57198 cri.go:89] found id: ""
	I0812 11:47:25.256334   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.256342   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:25.256347   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:25.256394   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:25.290211   57198 cri.go:89] found id: ""
	I0812 11:47:25.290245   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.290254   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:25.290263   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:25.290334   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:25.324612   57198 cri.go:89] found id: ""
	I0812 11:47:25.324644   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.324651   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:25.324657   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:25.324708   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:25.362160   57198 cri.go:89] found id: ""
	I0812 11:47:25.362189   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.362200   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:25.362208   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:25.362271   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:25.396434   57198 cri.go:89] found id: ""
	I0812 11:47:25.396458   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.396466   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:25.396471   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:25.396531   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:25.429708   57198 cri.go:89] found id: ""
	I0812 11:47:25.429738   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.429750   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:25.429761   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:25.429775   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:25.443553   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:25.443588   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:25.515643   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:25.515684   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:25.515699   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:25.596323   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:25.596365   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:25.632444   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:25.632482   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:28.182092   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:28.195568   57198 kubeadm.go:597] duration metric: took 4m2.144668431s to restartPrimaryControlPlane
	W0812 11:47:28.195647   57198 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0812 11:47:28.195678   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0812 11:47:29.194896   57198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:47:29.210273   57198 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 11:47:29.220401   57198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 11:47:29.230765   57198 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 11:47:29.230783   57198 kubeadm.go:157] found existing configuration files:
	
	I0812 11:47:29.230825   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 11:47:29.240322   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 11:47:29.240392   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 11:47:29.251511   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 11:47:29.261616   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 11:47:29.261675   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 11:47:27.795828   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:29.796889   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:29.683959   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:32.183115   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:32.541112   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:29.273431   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 11:47:29.284262   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 11:47:29.284331   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 11:47:29.295811   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 11:47:29.306613   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 11:47:29.306685   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
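The grep/rm sequence above is the stale-kubeconfig check run 57198 performs before re-initialising: each expected file under /etc/kubernetes is grepped for the control-plane endpoint and removed when the endpoint (or the file itself) is missing. A minimal Go sketch of the same check, mirroring the logged commands rather than minikube's internal API, and assuming sudo, grep and rm are available on the node:

package main

import (
	"fmt"
	"os/exec"
)

// cleanStaleKubeconfigs greps each kubeconfig for the expected control-plane
// endpoint and removes the file if grep exits non-zero (no match or no file).
func cleanStaleKubeconfigs() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// Mirrors: sudo grep <endpoint> <file>
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - removing\n", endpoint, f)
			// Mirrors: sudo rm -f <file>
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() { cleanStaleKubeconfigs() }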
	I0812 11:47:29.317986   57198 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 11:47:29.566668   57198 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 11:47:32.295992   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:34.795262   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:34.183370   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:36.682661   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:35.613159   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:36.796467   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:39.295851   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:39.182790   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:41.183829   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:41.693116   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:41.795257   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:43.795510   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:45.795595   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:43.681967   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:45.684043   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:44.765178   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:48.296050   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:50.796799   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:48.181748   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:50.182360   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:52.682975   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:50.845098   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:53.917138   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:53.299038   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:55.796462   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:55.183044   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:57.685262   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:58.295509   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:00.795668   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:00.182427   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:02.682842   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:59.997094   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:03.069083   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:03.296463   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:05.795306   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:05.182884   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:07.682408   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:07.796147   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:10.296184   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:10.182124   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:12.182757   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:09.149157   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:12.221135   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:12.296827   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:14.796551   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:14.682524   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:16.682657   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:18.301111   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:17.295545   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:19.295850   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:18.688121   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:21.182277   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:21.373181   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:21.297142   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:23.798497   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:23.182636   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:25.682702   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:27.682936   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:27.453111   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:26.295505   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:28.296105   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:30.796925   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:29.688759   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:32.182416   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:30.525184   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:33.295379   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:35.296605   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:34.183273   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:36.682829   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:36.605187   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:37.796023   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:38.789570   57616 pod_ready.go:81] duration metric: took 4m0.000355544s for pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace to be "Ready" ...
	E0812 11:48:38.789615   57616 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0812 11:48:38.789648   57616 pod_ready.go:38] duration metric: took 4m11.040926567s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:48:38.789687   57616 kubeadm.go:597] duration metric: took 4m21.131138259s to restartPrimaryControlPlane
	W0812 11:48:38.789757   57616 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0812 11:48:38.789794   57616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0812 11:48:38.683163   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:40.683334   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:39.677106   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:43.182845   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:44.677001   56845 pod_ready.go:81] duration metric: took 4m0.0007218s for pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace to be "Ready" ...
	E0812 11:48:44.677024   56845 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace to be "Ready" (will not retry!)
	I0812 11:48:44.677041   56845 pod_ready.go:38] duration metric: took 4m12.037310023s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:48:44.677065   56845 kubeadm.go:597] duration metric: took 4m19.591323336s to restartPrimaryControlPlane
	W0812 11:48:44.677114   56845 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0812 11:48:44.677137   56845 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
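At this point both run 57616 and run 56845 have hit the 4m0s WaitExtra timeout for their metrics-server pods, given up on restarting the existing control plane, and fallen back to kubeadm reset followed by a fresh kubeadm init. A minimal polling sketch of such a wait-until-Ready loop, using only the Go standard library and a hypothetical isReady predicate in place of a real API-server query:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitPodReady polls a readiness predicate until it reports true or the
// timeout elapses, mirroring the 4m0s WaitExtra timeout seen above.
func waitPodReady(name string, isReady func() bool, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if isReady() {
			return nil
		}
		fmt.Printf("pod %q has status \"Ready\":\"False\"\n", name)
		time.Sleep(interval)
	}
	return errors.New("timed out waiting " + timeout.String() + " for pod " + name + " to be Ready")
}

func main() {
	// Short demo values; the run above used a 4m0s timeout with ~2s polls.
	_ = waitPodReady("metrics-server-569cc877fc-8856c",
		func() bool { return false }, 10*time.Second, 2*time.Second)
}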
	I0812 11:48:45.757157   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:48.829146   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:54.909142   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:57.981079   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:04.870417   57616 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.080589185s)
	I0812 11:49:04.870490   57616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:49:04.897963   57616 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 11:49:04.912211   57616 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 11:49:04.933833   57616 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 11:49:04.933861   57616 kubeadm.go:157] found existing configuration files:
	
	I0812 11:49:04.933915   57616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 11:49:04.946673   57616 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 11:49:04.946756   57616 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 11:49:04.960851   57616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 11:49:04.989181   57616 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 11:49:04.989259   57616 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 11:49:05.002989   57616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 11:49:05.012600   57616 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 11:49:05.012673   57616 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 11:49:05.022301   57616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 11:49:05.031680   57616 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 11:49:05.031761   57616 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 11:49:05.041453   57616 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 11:49:05.087039   57616 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-rc.0
	I0812 11:49:05.087106   57616 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 11:49:05.195646   57616 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 11:49:05.195788   57616 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 11:49:05.195909   57616 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0812 11:49:05.204565   57616 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 11:49:05.207373   57616 out.go:204]   - Generating certificates and keys ...
	I0812 11:49:05.207481   57616 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 11:49:05.207573   57616 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 11:49:05.207696   57616 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0812 11:49:05.207792   57616 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0812 11:49:05.207896   57616 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0812 11:49:05.207995   57616 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0812 11:49:05.208103   57616 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0812 11:49:05.208195   57616 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0812 11:49:05.208296   57616 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0812 11:49:05.208401   57616 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0812 11:49:05.208456   57616 kubeadm.go:310] [certs] Using the existing "sa" key
	I0812 11:49:05.208531   57616 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 11:49:05.368644   57616 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 11:49:05.523403   57616 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0812 11:49:05.656177   57616 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 11:49:05.786141   57616 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 11:49:05.945607   57616 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 11:49:05.946201   57616 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 11:49:05.948940   57616 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 11:49:05.950857   57616 out.go:204]   - Booting up control plane ...
	I0812 11:49:05.950970   57616 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 11:49:05.951060   57616 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 11:49:05.952093   57616 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 11:49:05.971023   57616 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 11:49:05.978207   57616 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 11:49:05.978421   57616 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 11:49:06.109216   57616 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0812 11:49:06.109362   57616 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0812 11:49:04.061117   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:07.133143   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:07.110595   57616 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001459707s
	I0812 11:49:07.110732   57616 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0812 11:49:12.112776   57616 kubeadm.go:310] [api-check] The API server is healthy after 5.002008667s
	I0812 11:49:12.126637   57616 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0812 11:49:12.141115   57616 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0812 11:49:12.166337   57616 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0812 11:49:12.166727   57616 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-993542 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0812 11:49:12.180548   57616 kubeadm.go:310] [bootstrap-token] Using token: jiwh9x.y6rsv6xjvwdwkbct
	I0812 11:49:12.182174   57616 out.go:204]   - Configuring RBAC rules ...
	I0812 11:49:12.182276   57616 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0812 11:49:12.191053   57616 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0812 11:49:12.203294   57616 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0812 11:49:12.208858   57616 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0812 11:49:12.215501   57616 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0812 11:49:12.227747   57616 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0812 11:49:12.520136   57616 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0812 11:49:12.964503   57616 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0812 11:49:13.523969   57616 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0812 11:49:13.524831   57616 kubeadm.go:310] 
	I0812 11:49:13.524954   57616 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0812 11:49:13.524973   57616 kubeadm.go:310] 
	I0812 11:49:13.525098   57616 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0812 11:49:13.525113   57616 kubeadm.go:310] 
	I0812 11:49:13.525147   57616 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0812 11:49:13.525220   57616 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0812 11:49:13.525311   57616 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0812 11:49:13.525325   57616 kubeadm.go:310] 
	I0812 11:49:13.525411   57616 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0812 11:49:13.525420   57616 kubeadm.go:310] 
	I0812 11:49:13.525489   57616 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0812 11:49:13.525503   57616 kubeadm.go:310] 
	I0812 11:49:13.525572   57616 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0812 11:49:13.525690   57616 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0812 11:49:13.525780   57616 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0812 11:49:13.525790   57616 kubeadm.go:310] 
	I0812 11:49:13.525905   57616 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0812 11:49:13.526000   57616 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0812 11:49:13.526011   57616 kubeadm.go:310] 
	I0812 11:49:13.526119   57616 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jiwh9x.y6rsv6xjvwdwkbct \
	I0812 11:49:13.526271   57616 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc \
	I0812 11:49:13.526307   57616 kubeadm.go:310] 	--control-plane 
	I0812 11:49:13.526317   57616 kubeadm.go:310] 
	I0812 11:49:13.526420   57616 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0812 11:49:13.526429   57616 kubeadm.go:310] 
	I0812 11:49:13.526527   57616 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jiwh9x.y6rsv6xjvwdwkbct \
	I0812 11:49:13.526653   57616 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc 
	I0812 11:49:13.527630   57616 kubeadm.go:310] W0812 11:49:05.056260    3066 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0812 11:49:13.528000   57616 kubeadm.go:310] W0812 11:49:05.058135    3066 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0812 11:49:13.528149   57616 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 11:49:13.528175   57616 cni.go:84] Creating CNI manager for ""
	I0812 11:49:13.528189   57616 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:49:13.529938   57616 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0812 11:49:13.213137   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:13.531443   57616 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0812 11:49:13.542933   57616 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0812 11:49:13.562053   57616 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 11:49:13.562181   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:13.562196   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-993542 minikube.k8s.io/updated_at=2024_08_12T11_49_13_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7 minikube.k8s.io/name=no-preload-993542 minikube.k8s.io/primary=true
	I0812 11:49:13.764006   57616 ops.go:34] apiserver oom_adj: -16
	I0812 11:49:13.764145   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:14.264728   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:14.764225   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:15.264599   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:15.764919   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
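
The repeated `kubectl get sa default` calls from process 57616 above are minikube waiting for the cluster's default ServiceAccount to appear after it has issued the `minikube-rbac` ClusterRoleBinding at 11:49:13. Below is a minimal Go sketch of that create-then-poll pattern, shelling out to the same kubectl binary and kubeconfig paths visible in the log; the timing, error handling, and structure are illustrative and not minikube's actual elevateKubeSystemPrivileges implementation.

// Sketch only: grant the kube-system default ServiceAccount cluster-admin,
// then poll `kubectl get sa default` until it exists, mirroring the ~500 ms
// cadence seen in the log above. Paths are the node-local ones from this run.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func run(args ...string) error {
	return exec.Command(args[0], args[1:]...).Run()
}

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.0-rc.0/kubectl"
	kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"

	// Create the RBAC binding first, as the log does (idempotence not handled here).
	if err := run("sudo", kubectl, "create", "clusterrolebinding", "minikube-rbac",
		"--clusterrole=cluster-admin", "--serviceaccount=kube-system:default", kubeconfig); err != nil {
		fmt.Println("create clusterrolebinding:", err)
	}

	// Poll until the default ServiceAccount is visible or a deadline passes.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if run("sudo", kubectl, "get", "sa", "default", kubeconfig) == nil {
			fmt.Println("default ServiceAccount is present")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default ServiceAccount")
}
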
	I0812 11:49:15.943701   56845 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.266539018s)
	I0812 11:49:15.943778   56845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:49:15.959746   56845 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 11:49:15.970630   56845 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 11:49:15.980712   56845 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 11:49:15.980729   56845 kubeadm.go:157] found existing configuration files:
	
	I0812 11:49:15.980775   56845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 11:49:15.990070   56845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 11:49:15.990133   56845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 11:49:15.999602   56845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 11:49:16.008767   56845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 11:49:16.008855   56845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 11:49:16.019564   56845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 11:49:16.028585   56845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 11:49:16.028660   56845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 11:49:16.037916   56845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 11:49:16.047028   56845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 11:49:16.047087   56845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 11:49:16.056780   56845 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 11:49:16.104764   56845 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0812 11:49:16.104848   56845 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 11:49:16.239085   56845 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 11:49:16.239218   56845 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 11:49:16.239309   56845 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0812 11:49:16.456581   56845 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 11:49:16.458619   56845 out.go:204]   - Generating certificates and keys ...
	I0812 11:49:16.458731   56845 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 11:49:16.458805   56845 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 11:49:16.458927   56845 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0812 11:49:16.459037   56845 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0812 11:49:16.459121   56845 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0812 11:49:16.459191   56845 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0812 11:49:16.459281   56845 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0812 11:49:16.459385   56845 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0812 11:49:16.459469   56845 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0812 11:49:16.459569   56845 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0812 11:49:16.459643   56845 kubeadm.go:310] [certs] Using the existing "sa" key
	I0812 11:49:16.459734   56845 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 11:49:16.579477   56845 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 11:49:16.765880   56845 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0812 11:49:16.885469   56845 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 11:49:16.955885   56845 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 11:49:17.091576   56845 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 11:49:17.092005   56845 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 11:49:17.094454   56845 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 11:49:17.096720   56845 out.go:204]   - Booting up control plane ...
	I0812 11:49:17.096850   56845 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 11:49:17.096976   56845 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 11:49:17.098357   56845 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 11:49:17.115656   56845 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 11:49:17.116069   56845 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 11:49:17.116128   56845 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 11:49:17.256475   56845 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0812 11:49:17.256550   56845 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0812 11:49:17.758741   56845 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.271569ms
	I0812 11:49:17.758818   56845 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0812 11:49:16.264606   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:16.764905   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:17.264989   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:17.765205   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:18.265008   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:18.380060   57616 kubeadm.go:1113] duration metric: took 4.817945872s to wait for elevateKubeSystemPrivileges
	I0812 11:49:18.380107   57616 kubeadm.go:394] duration metric: took 5m0.782175026s to StartCluster
	I0812 11:49:18.380131   57616 settings.go:142] acquiring lock: {Name:mk4060151bda1f131d6abfbbd19b6a8ab3b5e774 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:49:18.380237   57616 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 11:49:18.382942   57616 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/kubeconfig: {Name:mke68f1d372d8e5baa69199b426efaec54860499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:49:18.383329   57616 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.148 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 11:49:18.383406   57616 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0812 11:49:18.383564   57616 addons.go:69] Setting storage-provisioner=true in profile "no-preload-993542"
	I0812 11:49:18.383573   57616 addons.go:69] Setting default-storageclass=true in profile "no-preload-993542"
	I0812 11:49:18.383603   57616 addons.go:234] Setting addon storage-provisioner=true in "no-preload-993542"
	W0812 11:49:18.383618   57616 addons.go:243] addon storage-provisioner should already be in state true
	I0812 11:49:18.383620   57616 config.go:182] Loaded profile config "no-preload-993542": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0812 11:49:18.383634   57616 addons.go:69] Setting metrics-server=true in profile "no-preload-993542"
	I0812 11:49:18.383653   57616 host.go:66] Checking if "no-preload-993542" exists ...
	I0812 11:49:18.383621   57616 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-993542"
	I0812 11:49:18.383662   57616 addons.go:234] Setting addon metrics-server=true in "no-preload-993542"
	W0812 11:49:18.383674   57616 addons.go:243] addon metrics-server should already be in state true
	I0812 11:49:18.383708   57616 host.go:66] Checking if "no-preload-993542" exists ...
	I0812 11:49:18.384042   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.384072   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.384089   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.384117   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.384181   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.384211   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.386531   57616 out.go:177] * Verifying Kubernetes components...
	I0812 11:49:18.388412   57616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:49:18.404269   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44399
	I0812 11:49:18.404302   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36427
	I0812 11:49:18.404279   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43565
	I0812 11:49:18.405011   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.405062   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.405012   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.405601   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.405603   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.405621   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.405636   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.405743   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.405769   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.406150   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.406174   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.406184   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.406762   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.406786   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.407101   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetState
	I0812 11:49:18.407395   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.407420   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.411782   57616 addons.go:234] Setting addon default-storageclass=true in "no-preload-993542"
	W0812 11:49:18.411813   57616 addons.go:243] addon default-storageclass should already be in state true
	I0812 11:49:18.411843   57616 host.go:66] Checking if "no-preload-993542" exists ...
	I0812 11:49:18.412202   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.412241   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.428999   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41401
	I0812 11:49:18.429469   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.430064   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.430087   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.430147   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45407
	I0812 11:49:18.430442   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.430500   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.430762   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetState
	I0812 11:49:18.431525   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.431539   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.431950   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.432152   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetState
	I0812 11:49:18.432474   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46405
	I0812 11:49:18.432876   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.433599   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.433618   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.433872   57616 main.go:141] libmachine: (no-preload-993542) Calling .DriverName
	I0812 11:49:18.434119   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.434381   57616 main.go:141] libmachine: (no-preload-993542) Calling .DriverName
	I0812 11:49:18.434819   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.434875   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.436590   57616 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 11:49:18.436703   57616 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0812 11:49:16.285160   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:18.438442   57616 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 11:49:18.438466   57616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 11:49:18.438489   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHHostname
	I0812 11:49:18.438698   57616 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0812 11:49:18.438713   57616 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0812 11:49:18.438731   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHHostname
	I0812 11:49:18.443927   57616 main.go:141] libmachine: (no-preload-993542) DBG | domain no-preload-993542 has defined MAC address 52:54:00:bc:11:b5 in network mk-no-preload-993542
	I0812 11:49:18.443965   57616 main.go:141] libmachine: (no-preload-993542) DBG | domain no-preload-993542 has defined MAC address 52:54:00:bc:11:b5 in network mk-no-preload-993542
	I0812 11:49:18.444276   57616 main.go:141] libmachine: (no-preload-993542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:11:b5", ip: ""} in network mk-no-preload-993542: {Iface:virbr1 ExpiryTime:2024-08-12 12:43:50 +0000 UTC Type:0 Mac:52:54:00:bc:11:b5 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:no-preload-993542 Clientid:01:52:54:00:bc:11:b5}
	I0812 11:49:18.444315   57616 main.go:141] libmachine: (no-preload-993542) DBG | domain no-preload-993542 has defined IP address 192.168.61.148 and MAC address 52:54:00:bc:11:b5 in network mk-no-preload-993542
	I0812 11:49:18.444373   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHPort
	I0812 11:49:18.444614   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHKeyPath
	I0812 11:49:18.444790   57616 main.go:141] libmachine: (no-preload-993542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:11:b5", ip: ""} in network mk-no-preload-993542: {Iface:virbr1 ExpiryTime:2024-08-12 12:43:50 +0000 UTC Type:0 Mac:52:54:00:bc:11:b5 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:no-preload-993542 Clientid:01:52:54:00:bc:11:b5}
	I0812 11:49:18.444824   57616 main.go:141] libmachine: (no-preload-993542) DBG | domain no-preload-993542 has defined IP address 192.168.61.148 and MAC address 52:54:00:bc:11:b5 in network mk-no-preload-993542
	I0812 11:49:18.444851   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHUsername
	I0812 11:49:18.445055   57616 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/no-preload-993542/id_rsa Username:docker}
	I0812 11:49:18.445427   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHPort
	I0812 11:49:18.445624   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHKeyPath
	I0812 11:49:18.445776   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHUsername
	I0812 11:49:18.445938   57616 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/no-preload-993542/id_rsa Username:docker}
	I0812 11:49:18.457462   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I0812 11:49:18.457995   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.458573   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.458602   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.459048   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.459315   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetState
	I0812 11:49:18.461486   57616 main.go:141] libmachine: (no-preload-993542) Calling .DriverName
	I0812 11:49:18.461753   57616 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 11:49:18.461770   57616 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 11:49:18.461788   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHHostname
	I0812 11:49:18.465243   57616 main.go:141] libmachine: (no-preload-993542) DBG | domain no-preload-993542 has defined MAC address 52:54:00:bc:11:b5 in network mk-no-preload-993542
	I0812 11:49:18.465776   57616 main.go:141] libmachine: (no-preload-993542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:11:b5", ip: ""} in network mk-no-preload-993542: {Iface:virbr1 ExpiryTime:2024-08-12 12:43:50 +0000 UTC Type:0 Mac:52:54:00:bc:11:b5 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:no-preload-993542 Clientid:01:52:54:00:bc:11:b5}
	I0812 11:49:18.465803   57616 main.go:141] libmachine: (no-preload-993542) DBG | domain no-preload-993542 has defined IP address 192.168.61.148 and MAC address 52:54:00:bc:11:b5 in network mk-no-preload-993542
	I0812 11:49:18.465981   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHPort
	I0812 11:49:18.466172   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHKeyPath
	I0812 11:49:18.466325   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHUsername
	I0812 11:49:18.466478   57616 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/no-preload-993542/id_rsa Username:docker}
	I0812 11:49:18.649285   57616 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 11:49:18.666240   57616 node_ready.go:35] waiting up to 6m0s for node "no-preload-993542" to be "Ready" ...
	I0812 11:49:18.675741   57616 node_ready.go:49] node "no-preload-993542" has status "Ready":"True"
	I0812 11:49:18.675769   57616 node_ready.go:38] duration metric: took 9.489483ms for node "no-preload-993542" to be "Ready" ...
	I0812 11:49:18.675781   57616 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:49:18.687934   57616 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-2gc2z" in "kube-system" namespace to be "Ready" ...
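
The pod_ready.go lines above poll each system-critical pod until its Ready condition is True (the coredns pod flips from "False" to "True" at 11:49:25 further down). A minimal client-go sketch of that readiness check follows, assuming direct use of the kubeconfig path this run updates; the label-based pod discovery and multi-pod bookkeeping minikube actually performs are omitted.

// Sketch only: wait for one pod's Ready condition, not minikube's pod_ready.go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path and pod name are taken from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19409-3774/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-6f6b679f8f-2gc2z", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for the pod to be Ready")
}
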
	I0812 11:49:18.762652   57616 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 11:49:18.769504   57616 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0812 11:49:18.769533   57616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0812 11:49:18.801182   57616 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 11:49:18.815215   57616 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0812 11:49:18.815249   57616 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0812 11:49:18.869830   57616 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 11:49:18.869856   57616 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0812 11:49:18.943609   57616 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 11:49:19.326108   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.326145   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.326183   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.326200   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.326517   57616 main.go:141] libmachine: (no-preload-993542) DBG | Closing plugin on server side
	I0812 11:49:19.326543   57616 main.go:141] libmachine: (no-preload-993542) DBG | Closing plugin on server side
	I0812 11:49:19.326558   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.326571   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.326577   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.326580   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.326586   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.326588   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.326597   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.326598   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.326969   57616 main.go:141] libmachine: (no-preload-993542) DBG | Closing plugin on server side
	I0812 11:49:19.326997   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.327005   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.327232   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.327247   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.349315   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.349341   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.349693   57616 main.go:141] libmachine: (no-preload-993542) DBG | Closing plugin on server side
	I0812 11:49:19.349737   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.349746   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.620732   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.620765   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.621097   57616 main.go:141] libmachine: (no-preload-993542) DBG | Closing plugin on server side
	I0812 11:49:19.621143   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.621160   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.621170   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.621182   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.621446   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.621469   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.621481   57616 addons.go:475] Verifying addon metrics-server=true in "no-preload-993542"
	I0812 11:49:19.624757   57616 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0812 11:49:19.626510   57616 addons.go:510] duration metric: took 1.243102289s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
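
Each addon enabled above follows the same pattern: the manifest bytes are scp'd into /etc/kubernetes/addons/ on the node, then the bundled kubectl applies them against the node-local kubeconfig. The sketch below shows only that final apply step, run on the node itself; minikube issues the equivalent command over SSH via ssh_runner, and it applies storage-provisioner, storageclass, and metrics-server in separate invocations, which this sketch batches for brevity.

// Sketch only: apply the already-copied addon manifests with the node-local
// kubectl and kubeconfig, using the paths visible in the log above.
package main

import (
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}

	// Builds: sudo KUBECONFIG=... kubectl apply -f a.yaml -f b.yaml ...
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.31.0-rc.0/kubectl",
		"apply",
	}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}

	cmd := exec.Command("sudo", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
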
	I0812 11:49:20.695552   57616 pod_ready.go:102] pod "coredns-6f6b679f8f-2gc2z" in "kube-system" namespace has status "Ready":"False"
	I0812 11:49:22.762626   56845 kubeadm.go:310] [api-check] The API server is healthy after 5.002108915s
	I0812 11:49:22.782365   56845 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0812 11:49:22.794869   56845 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0812 11:49:22.829058   56845 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0812 11:49:22.829314   56845 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-093615 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0812 11:49:22.842722   56845 kubeadm.go:310] [bootstrap-token] Using token: e42mo3.61s6ofjvy51u5vh7
	I0812 11:49:22.844590   56845 out.go:204]   - Configuring RBAC rules ...
	I0812 11:49:22.844745   56845 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0812 11:49:22.851804   56845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0812 11:49:22.861419   56845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0812 11:49:22.866597   56845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0812 11:49:22.870810   56845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0812 11:49:22.886117   56845 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0812 11:49:22.365060   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:23.168156   56845 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0812 11:49:23.612002   56845 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0812 11:49:24.170270   56845 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0812 11:49:24.171014   56845 kubeadm.go:310] 
	I0812 11:49:24.171076   56845 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0812 11:49:24.171084   56845 kubeadm.go:310] 
	I0812 11:49:24.171146   56845 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0812 11:49:24.171153   56845 kubeadm.go:310] 
	I0812 11:49:24.171204   56845 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0812 11:49:24.171801   56845 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0812 11:49:24.171846   56845 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0812 11:49:24.171853   56845 kubeadm.go:310] 
	I0812 11:49:24.171954   56845 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0812 11:49:24.171975   56845 kubeadm.go:310] 
	I0812 11:49:24.172039   56845 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0812 11:49:24.172051   56845 kubeadm.go:310] 
	I0812 11:49:24.172125   56845 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0812 11:49:24.172247   56845 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0812 11:49:24.172360   56845 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0812 11:49:24.172378   56845 kubeadm.go:310] 
	I0812 11:49:24.172498   56845 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0812 11:49:24.172601   56845 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0812 11:49:24.172611   56845 kubeadm.go:310] 
	I0812 11:49:24.172772   56845 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token e42mo3.61s6ofjvy51u5vh7 \
	I0812 11:49:24.172908   56845 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc \
	I0812 11:49:24.172944   56845 kubeadm.go:310] 	--control-plane 
	I0812 11:49:24.172953   56845 kubeadm.go:310] 
	I0812 11:49:24.173063   56845 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0812 11:49:24.173073   56845 kubeadm.go:310] 
	I0812 11:49:24.173209   56845 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token e42mo3.61s6ofjvy51u5vh7 \
	I0812 11:49:24.173363   56845 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc 
	I0812 11:49:24.173919   56845 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 11:49:24.173990   56845 cni.go:84] Creating CNI manager for ""
	I0812 11:49:24.174008   56845 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:49:24.176549   56845 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0812 11:49:25.662550   57198 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0812 11:49:25.662668   57198 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0812 11:49:25.664487   57198 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0812 11:49:25.664563   57198 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 11:49:25.664640   57198 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 11:49:25.664729   57198 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 11:49:25.664809   57198 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0812 11:49:25.664949   57198 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 11:49:25.666793   57198 out.go:204]   - Generating certificates and keys ...
	I0812 11:49:25.666861   57198 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 11:49:25.666925   57198 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 11:49:25.667017   57198 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0812 11:49:25.667091   57198 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0812 11:49:25.667181   57198 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0812 11:49:25.667232   57198 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0812 11:49:25.667306   57198 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0812 11:49:25.667359   57198 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0812 11:49:25.667437   57198 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0812 11:49:25.667536   57198 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0812 11:49:25.667592   57198 kubeadm.go:310] [certs] Using the existing "sa" key
	I0812 11:49:25.667680   57198 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 11:49:25.667754   57198 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 11:49:25.667839   57198 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 11:49:25.667950   57198 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 11:49:25.668040   57198 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 11:49:25.668189   57198 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 11:49:25.668289   57198 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 11:49:25.668333   57198 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 11:49:25.668400   57198 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 11:49:22.696279   57616 pod_ready.go:102] pod "coredns-6f6b679f8f-2gc2z" in "kube-system" namespace has status "Ready":"False"
	I0812 11:49:25.194695   57616 pod_ready.go:102] pod "coredns-6f6b679f8f-2gc2z" in "kube-system" namespace has status "Ready":"False"
	I0812 11:49:25.695175   57616 pod_ready.go:92] pod "coredns-6f6b679f8f-2gc2z" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:25.695199   57616 pod_ready.go:81] duration metric: took 7.007233179s for pod "coredns-6f6b679f8f-2gc2z" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:25.695209   57616 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-shfmr" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:25.670765   57198 out.go:204]   - Booting up control plane ...
	I0812 11:49:25.670861   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 11:49:25.670939   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 11:49:25.671039   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 11:49:25.671150   57198 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 11:49:25.671295   57198 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0812 11:49:25.671379   57198 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0812 11:49:25.671476   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:49:25.671647   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:49:25.671705   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:49:25.671862   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:49:25.671919   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:49:25.672079   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:49:25.672136   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:49:25.672288   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:49:25.672347   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:49:25.672558   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:49:25.672576   57198 kubeadm.go:310] 
	I0812 11:49:25.672636   57198 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0812 11:49:25.672686   57198 kubeadm.go:310] 		timed out waiting for the condition
	I0812 11:49:25.672701   57198 kubeadm.go:310] 
	I0812 11:49:25.672757   57198 kubeadm.go:310] 	This error is likely caused by:
	I0812 11:49:25.672811   57198 kubeadm.go:310] 		- The kubelet is not running
	I0812 11:49:25.672932   57198 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0812 11:49:25.672941   57198 kubeadm.go:310] 
	I0812 11:49:25.673048   57198 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0812 11:49:25.673091   57198 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0812 11:49:25.673133   57198 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0812 11:49:25.673141   57198 kubeadm.go:310] 
	I0812 11:49:25.673242   57198 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0812 11:49:25.673343   57198 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0812 11:49:25.673353   57198 kubeadm.go:310] 
	I0812 11:49:25.673513   57198 kubeadm.go:310] 	Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0812 11:49:25.673593   57198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0812 11:49:25.673660   57198 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0812 11:49:25.673724   57198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0812 11:49:25.673768   57198 kubeadm.go:310] 
	W0812 11:49:25.673837   57198 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
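	The failed init above ends with kubeadm's own troubleshooting hints; a minimal sketch of following them up by hand, from a shell on the affected node (e.g. via minikube ssh — the profile name is not shown at this point in the log, so it is left out here), would be:

	# Check whether the kubelet unit ever came up and what it last logged.
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet --no-pager | tail -n 50
	# List any control-plane containers cri-o managed to start, then pull the logs of the failing one.
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs <CONTAINERID>   # <CONTAINERID> taken from the previous command

	minikube itself does not stop here: the lines that follow reset the node and re-run kubeadm init with the same flags.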
	
	I0812 11:49:25.673882   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0812 11:49:26.145437   57198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:49:26.160316   57198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 11:49:26.169638   57198 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 11:49:26.169664   57198 kubeadm.go:157] found existing configuration files:
	
	I0812 11:49:26.169711   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 11:49:26.179210   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 11:49:26.179278   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 11:49:26.189165   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 11:49:26.198952   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 11:49:26.199019   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 11:49:26.208905   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 11:49:26.217947   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 11:49:26.218003   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 11:49:26.227048   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 11:49:26.235890   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 11:49:26.235946   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 11:49:26.245085   57198 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 11:49:26.313657   57198 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0812 11:49:26.313809   57198 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 11:49:26.463967   57198 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 11:49:26.464098   57198 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 11:49:26.464204   57198 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0812 11:49:26.650503   57198 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 11:49:26.652540   57198 out.go:204]   - Generating certificates and keys ...
	I0812 11:49:26.652631   57198 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 11:49:26.652686   57198 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 11:49:26.652751   57198 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0812 11:49:26.652803   57198 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0812 11:49:26.652913   57198 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0812 11:49:26.652983   57198 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0812 11:49:26.653052   57198 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0812 11:49:26.653157   57198 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0812 11:49:26.653299   57198 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0812 11:49:26.653430   57198 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0812 11:49:26.653489   57198 kubeadm.go:310] [certs] Using the existing "sa" key
	I0812 11:49:26.653569   57198 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 11:49:26.881003   57198 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 11:49:26.962055   57198 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 11:49:27.166060   57198 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 11:49:27.340900   57198 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 11:49:27.359946   57198 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 11:49:27.362022   57198 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 11:49:27.362302   57198 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 11:49:27.515254   57198 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 11:49:24.177809   56845 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0812 11:49:24.188175   56845 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0812 11:49:24.208060   56845 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 11:49:24.208152   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:24.208209   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-093615 minikube.k8s.io/updated_at=2024_08_12T11_49_24_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7 minikube.k8s.io/name=embed-certs-093615 minikube.k8s.io/primary=true
	I0812 11:49:24.393211   56845 ops.go:34] apiserver oom_adj: -16
	I0812 11:49:24.393296   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:24.894092   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:25.394229   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:25.893667   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:26.394057   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:26.893509   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:27.394296   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:27.893453   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:25.441104   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:27.517314   57198 out.go:204]   - Booting up control plane ...
	I0812 11:49:27.517444   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 11:49:27.523528   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 11:49:27.524732   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 11:49:27.525723   57198 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 11:49:27.527868   57198 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0812 11:49:27.702461   57616 pod_ready.go:102] pod "coredns-6f6b679f8f-shfmr" in "kube-system" namespace has status "Ready":"False"
	I0812 11:49:28.202582   57616 pod_ready.go:92] pod "coredns-6f6b679f8f-shfmr" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:28.202608   57616 pod_ready.go:81] duration metric: took 2.507391262s for pod "coredns-6f6b679f8f-shfmr" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.202621   57616 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.207529   57616 pod_ready.go:92] pod "etcd-no-preload-993542" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:28.207551   57616 pod_ready.go:81] duration metric: took 4.923206ms for pod "etcd-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.207560   57616 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.212760   57616 pod_ready.go:92] pod "kube-apiserver-no-preload-993542" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:28.212794   57616 pod_ready.go:81] duration metric: took 5.223592ms for pod "kube-apiserver-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.212807   57616 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.216970   57616 pod_ready.go:92] pod "kube-controller-manager-no-preload-993542" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:28.216993   57616 pod_ready.go:81] duration metric: took 4.177186ms for pod "kube-controller-manager-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.217004   57616 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8jwkz" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.221078   57616 pod_ready.go:92] pod "kube-proxy-8jwkz" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:28.221096   57616 pod_ready.go:81] duration metric: took 4.085629ms for pod "kube-proxy-8jwkz" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.221105   57616 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.600004   57616 pod_ready.go:92] pod "kube-scheduler-no-preload-993542" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:28.600031   57616 pod_ready.go:81] duration metric: took 378.92044ms for pod "kube-scheduler-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.600039   57616 pod_ready.go:38] duration metric: took 9.924247425s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:49:28.600053   57616 api_server.go:52] waiting for apiserver process to appear ...
	I0812 11:49:28.600102   57616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:49:28.615007   57616 api_server.go:72] duration metric: took 10.231634381s to wait for apiserver process to appear ...
	I0812 11:49:28.615043   57616 api_server.go:88] waiting for apiserver healthz status ...
	I0812 11:49:28.615063   57616 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8443/healthz ...
	I0812 11:49:28.620301   57616 api_server.go:279] https://192.168.61.148:8443/healthz returned 200:
	ok
	I0812 11:49:28.621814   57616 api_server.go:141] control plane version: v1.31.0-rc.0
	I0812 11:49:28.621843   57616 api_server.go:131] duration metric: took 6.792657ms to wait for apiserver health ...
	I0812 11:49:28.621858   57616 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 11:49:28.804172   57616 system_pods.go:59] 9 kube-system pods found
	I0812 11:49:28.804204   57616 system_pods.go:61] "coredns-6f6b679f8f-2gc2z" [4d5375c0-6f19-40b7-98bc-50d4ef45fd93] Running
	I0812 11:49:28.804208   57616 system_pods.go:61] "coredns-6f6b679f8f-shfmr" [6fd90de8-af9e-4b43-9fa7-b503a00e9845] Running
	I0812 11:49:28.804213   57616 system_pods.go:61] "etcd-no-preload-993542" [c3144e52-830b-47f1-913d-e44880368ee4] Running
	I0812 11:49:28.804216   57616 system_pods.go:61] "kube-apiserver-no-preload-993542" [73061d9a-d3cd-421a-bbd5-7bfe221d8729] Running
	I0812 11:49:28.804219   57616 system_pods.go:61] "kube-controller-manager-no-preload-993542" [0999e6c2-30b8-4d53-9420-6a00757eb9d4] Running
	I0812 11:49:28.804224   57616 system_pods.go:61] "kube-proxy-8jwkz" [43501e17-fde3-4468-a170-e64a58088ec2] Running
	I0812 11:49:28.804227   57616 system_pods.go:61] "kube-scheduler-no-preload-993542" [edaa4d82-7994-4052-ba5b-5729c543c006] Running
	I0812 11:49:28.804232   57616 system_pods.go:61] "metrics-server-6867b74b74-25zg8" [70d17780-d4bc-4df4-93ac-bb74c1fa50f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 11:49:28.804236   57616 system_pods.go:61] "storage-provisioner" [beb7a321-e575-44e5-8d10-3749d1285806] Running
	I0812 11:49:28.804244   57616 system_pods.go:74] duration metric: took 182.379622ms to wait for pod list to return data ...
	I0812 11:49:28.804251   57616 default_sa.go:34] waiting for default service account to be created ...
	I0812 11:49:28.999537   57616 default_sa.go:45] found service account: "default"
	I0812 11:49:28.999571   57616 default_sa.go:55] duration metric: took 195.31354ms for default service account to be created ...
	I0812 11:49:28.999582   57616 system_pods.go:116] waiting for k8s-apps to be running ...
	I0812 11:49:29.205266   57616 system_pods.go:86] 9 kube-system pods found
	I0812 11:49:29.205296   57616 system_pods.go:89] "coredns-6f6b679f8f-2gc2z" [4d5375c0-6f19-40b7-98bc-50d4ef45fd93] Running
	I0812 11:49:29.205301   57616 system_pods.go:89] "coredns-6f6b679f8f-shfmr" [6fd90de8-af9e-4b43-9fa7-b503a00e9845] Running
	I0812 11:49:29.205306   57616 system_pods.go:89] "etcd-no-preload-993542" [c3144e52-830b-47f1-913d-e44880368ee4] Running
	I0812 11:49:29.205310   57616 system_pods.go:89] "kube-apiserver-no-preload-993542" [73061d9a-d3cd-421a-bbd5-7bfe221d8729] Running
	I0812 11:49:29.205315   57616 system_pods.go:89] "kube-controller-manager-no-preload-993542" [0999e6c2-30b8-4d53-9420-6a00757eb9d4] Running
	I0812 11:49:29.205319   57616 system_pods.go:89] "kube-proxy-8jwkz" [43501e17-fde3-4468-a170-e64a58088ec2] Running
	I0812 11:49:29.205323   57616 system_pods.go:89] "kube-scheduler-no-preload-993542" [edaa4d82-7994-4052-ba5b-5729c543c006] Running
	I0812 11:49:29.205329   57616 system_pods.go:89] "metrics-server-6867b74b74-25zg8" [70d17780-d4bc-4df4-93ac-bb74c1fa50f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 11:49:29.205335   57616 system_pods.go:89] "storage-provisioner" [beb7a321-e575-44e5-8d10-3749d1285806] Running
	I0812 11:49:29.205342   57616 system_pods.go:126] duration metric: took 205.754437ms to wait for k8s-apps to be running ...
	I0812 11:49:29.205348   57616 system_svc.go:44] waiting for kubelet service to be running ....
	I0812 11:49:29.205390   57616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:49:29.220297   57616 system_svc.go:56] duration metric: took 14.940181ms WaitForService to wait for kubelet
	I0812 11:49:29.220343   57616 kubeadm.go:582] duration metric: took 10.836962086s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 11:49:29.220369   57616 node_conditions.go:102] verifying NodePressure condition ...
	I0812 11:49:29.400598   57616 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 11:49:29.400634   57616 node_conditions.go:123] node cpu capacity is 2
	I0812 11:49:29.400648   57616 node_conditions.go:105] duration metric: took 180.272764ms to run NodePressure ...
	I0812 11:49:29.400663   57616 start.go:241] waiting for startup goroutines ...
	I0812 11:49:29.400675   57616 start.go:246] waiting for cluster config update ...
	I0812 11:49:29.400691   57616 start.go:255] writing updated cluster config ...
	I0812 11:49:29.401086   57616 ssh_runner.go:195] Run: rm -f paused
	I0812 11:49:29.454975   57616 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-rc.0 (minor skew: 1)
	I0812 11:49:29.457349   57616 out.go:177] * Done! kubectl is now configured to use "no-preload-993542" cluster and "default" namespace by default
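	With the no-preload profile reporting Done, the kubeconfig context carries the profile name, so a quick manual check of the cluster state logged above might look like the sketch below (context name taken from the "Done!" line; KUBECONFIG is assumed to point at the file the test updates, /home/jenkins/minikube-integration/19409-3774/kubeconfig):

	kubectl --context no-preload-993542 get nodes -o wide
	kubectl --context no-preload-993542 get pods -n kube-system
	# The one pod listed as Pending above is metrics-server; its events show why it is not Ready.
	kubectl --context no-preload-993542 -n kube-system describe pod metrics-server-6867b74b74-25zg8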
	I0812 11:49:28.394104   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:28.894284   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:29.393380   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:29.893417   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:30.394034   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:30.893668   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:31.394322   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:31.894069   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:32.393691   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:32.893944   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:31.517192   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:33.393880   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:33.894126   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:34.393857   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:34.893356   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:35.394181   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:35.894116   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:36.393690   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:36.893650   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:37.394325   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:37.524187   56845 kubeadm.go:1113] duration metric: took 13.316085022s to wait for elevateKubeSystemPrivileges
	I0812 11:49:37.524225   56845 kubeadm.go:394] duration metric: took 5m12.500523071s to StartCluster
	I0812 11:49:37.524246   56845 settings.go:142] acquiring lock: {Name:mk4060151bda1f131d6abfbbd19b6a8ab3b5e774 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:49:37.524334   56845 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 11:49:37.526822   56845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/kubeconfig: {Name:mke68f1d372d8e5baa69199b426efaec54860499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:49:37.527125   56845 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.191 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 11:49:37.527189   56845 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0812 11:49:37.527272   56845 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-093615"
	I0812 11:49:37.527285   56845 addons.go:69] Setting default-storageclass=true in profile "embed-certs-093615"
	I0812 11:49:37.527307   56845 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-093615"
	I0812 11:49:37.527307   56845 config.go:182] Loaded profile config "embed-certs-093615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	W0812 11:49:37.527315   56845 addons.go:243] addon storage-provisioner should already be in state true
	I0812 11:49:37.527318   56845 addons.go:69] Setting metrics-server=true in profile "embed-certs-093615"
	I0812 11:49:37.527337   56845 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-093615"
	I0812 11:49:37.527345   56845 host.go:66] Checking if "embed-certs-093615" exists ...
	I0812 11:49:37.527362   56845 addons.go:234] Setting addon metrics-server=true in "embed-certs-093615"
	W0812 11:49:37.527375   56845 addons.go:243] addon metrics-server should already be in state true
	I0812 11:49:37.527413   56845 host.go:66] Checking if "embed-certs-093615" exists ...
	I0812 11:49:37.527769   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.527791   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.527816   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.527798   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.527769   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.527928   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.528806   56845 out.go:177] * Verifying Kubernetes components...
	I0812 11:49:37.530366   56845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:49:37.544367   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33533
	I0812 11:49:37.544919   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45995
	I0812 11:49:37.545052   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.545492   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.545535   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.545551   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.546095   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.546220   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.546247   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.546267   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetState
	I0812 11:49:37.547090   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.547667   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.547697   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.548008   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45761
	I0812 11:49:37.550024   56845 addons.go:234] Setting addon default-storageclass=true in "embed-certs-093615"
	W0812 11:49:37.550048   56845 addons.go:243] addon default-storageclass should already be in state true
	I0812 11:49:37.550079   56845 host.go:66] Checking if "embed-certs-093615" exists ...
	I0812 11:49:37.550469   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.550500   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.550728   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.551342   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.551373   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.551748   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.552314   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.552354   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.566505   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40769
	I0812 11:49:37.567085   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.567510   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.567526   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.567900   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.568133   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetState
	I0812 11:49:37.570307   56845 main.go:141] libmachine: (embed-certs-093615) Calling .DriverName
	I0812 11:49:37.571789   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36425
	I0812 11:49:37.572127   56845 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 11:49:37.572191   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.572730   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.572752   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.573044   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43723
	I0812 11:49:37.573231   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.573619   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.573815   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.573840   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.573849   56845 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 11:49:37.573870   56845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 11:49:37.573890   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHHostname
	I0812 11:49:37.574787   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.574809   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.575722   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.575937   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetState
	I0812 11:49:37.578054   56845 main.go:141] libmachine: (embed-certs-093615) DBG | domain embed-certs-093615 has defined MAC address 52:54:00:a2:eb:0c in network mk-embed-certs-093615
	I0812 11:49:37.578069   56845 main.go:141] libmachine: (embed-certs-093615) Calling .DriverName
	I0812 11:49:37.578536   56845 main.go:141] libmachine: (embed-certs-093615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:eb:0c", ip: ""} in network mk-embed-certs-093615: {Iface:virbr3 ExpiryTime:2024-08-12 12:44:10 +0000 UTC Type:0 Mac:52:54:00:a2:eb:0c Iaid: IPaddr:192.168.72.191 Prefix:24 Hostname:embed-certs-093615 Clientid:01:52:54:00:a2:eb:0c}
	I0812 11:49:37.578565   56845 main.go:141] libmachine: (embed-certs-093615) DBG | domain embed-certs-093615 has defined IP address 192.168.72.191 and MAC address 52:54:00:a2:eb:0c in network mk-embed-certs-093615
	I0812 11:49:37.578833   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHPort
	I0812 11:49:37.579012   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHKeyPath
	I0812 11:49:37.579170   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHUsername
	I0812 11:49:37.579326   56845 sshutil.go:53] new ssh client: &{IP:192.168.72.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/embed-certs-093615/id_rsa Username:docker}
	I0812 11:49:37.580007   56845 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0812 11:49:37.581298   56845 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0812 11:49:37.581313   56845 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0812 11:49:37.581334   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHHostname
	I0812 11:49:37.585114   56845 main.go:141] libmachine: (embed-certs-093615) DBG | domain embed-certs-093615 has defined MAC address 52:54:00:a2:eb:0c in network mk-embed-certs-093615
	I0812 11:49:37.585809   56845 main.go:141] libmachine: (embed-certs-093615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:eb:0c", ip: ""} in network mk-embed-certs-093615: {Iface:virbr3 ExpiryTime:2024-08-12 12:44:10 +0000 UTC Type:0 Mac:52:54:00:a2:eb:0c Iaid: IPaddr:192.168.72.191 Prefix:24 Hostname:embed-certs-093615 Clientid:01:52:54:00:a2:eb:0c}
	I0812 11:49:37.585839   56845 main.go:141] libmachine: (embed-certs-093615) DBG | domain embed-certs-093615 has defined IP address 192.168.72.191 and MAC address 52:54:00:a2:eb:0c in network mk-embed-certs-093615
	I0812 11:49:37.585914   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHPort
	I0812 11:49:37.586160   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHKeyPath
	I0812 11:49:37.586338   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHUsername
	I0812 11:49:37.586476   56845 sshutil.go:53] new ssh client: &{IP:192.168.72.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/embed-certs-093615/id_rsa Username:docker}
	I0812 11:49:37.591678   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I0812 11:49:37.592146   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.592684   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.592702   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.593075   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.593241   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetState
	I0812 11:49:37.595117   56845 main.go:141] libmachine: (embed-certs-093615) Calling .DriverName
	I0812 11:49:37.595398   56845 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 11:49:37.595413   56845 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 11:49:37.595430   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHHostname
	I0812 11:49:37.598417   56845 main.go:141] libmachine: (embed-certs-093615) DBG | domain embed-certs-093615 has defined MAC address 52:54:00:a2:eb:0c in network mk-embed-certs-093615
	I0812 11:49:37.598771   56845 main.go:141] libmachine: (embed-certs-093615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:eb:0c", ip: ""} in network mk-embed-certs-093615: {Iface:virbr3 ExpiryTime:2024-08-12 12:44:10 +0000 UTC Type:0 Mac:52:54:00:a2:eb:0c Iaid: IPaddr:192.168.72.191 Prefix:24 Hostname:embed-certs-093615 Clientid:01:52:54:00:a2:eb:0c}
	I0812 11:49:37.598792   56845 main.go:141] libmachine: (embed-certs-093615) DBG | domain embed-certs-093615 has defined IP address 192.168.72.191 and MAC address 52:54:00:a2:eb:0c in network mk-embed-certs-093615
	I0812 11:49:37.599008   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHPort
	I0812 11:49:37.599209   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHKeyPath
	I0812 11:49:37.599369   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHUsername
	I0812 11:49:37.599507   56845 sshutil.go:53] new ssh client: &{IP:192.168.72.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/embed-certs-093615/id_rsa Username:docker}
	I0812 11:49:37.757714   56845 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 11:49:37.783594   56845 node_ready.go:35] waiting up to 6m0s for node "embed-certs-093615" to be "Ready" ...
	I0812 11:49:37.801679   56845 node_ready.go:49] node "embed-certs-093615" has status "Ready":"True"
	I0812 11:49:37.801707   56845 node_ready.go:38] duration metric: took 18.078817ms for node "embed-certs-093615" to be "Ready" ...
	I0812 11:49:37.801719   56845 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:49:37.814704   56845 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cjbwn" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:37.860064   56845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 11:49:37.913642   56845 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0812 11:49:37.913673   56845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0812 11:49:37.932638   56845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 11:49:37.948027   56845 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0812 11:49:37.948052   56845 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0812 11:49:38.000773   56845 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 11:49:38.000805   56845 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0812 11:49:38.050478   56845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 11:49:38.655431   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.655458   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.655477   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.655460   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.655760   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.655875   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.655888   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.655897   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.655792   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.655971   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.655979   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.655986   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.655812   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.655832   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.656156   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.656161   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.656172   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.656199   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.656225   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.656231   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.707240   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.707268   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.707596   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.707618   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.707667   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.832725   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.832758   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.833072   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.833114   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.833134   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.833155   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.833165   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.833416   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.833461   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.833472   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.833483   56845 addons.go:475] Verifying addon metrics-server=true in "embed-certs-093615"
	I0812 11:49:38.835319   56845 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0812 11:49:34.589171   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:38.836977   56845 addons.go:510] duration metric: took 1.309786928s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
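	For embed-certs-093615 the metrics-server addon is wired to fake.domain/registry.k8s.io/echoserver:1.4 (the "Using image" line earlier), an image under a placeholder registry that cannot be pulled, so the metrics-server pod it deploys is unlikely to reach Ready — the pod lists further down show it Pending with ContainersNotReady. A short sketch for confirming the addon state by hand, assuming the same minikube binary is on PATH:

	minikube addons list -p embed-certs-093615
	kubectl --context embed-certs-093615 -n kube-system get pod metrics-server-569cc877fc-kwk6t
	# The pod events report the image pull status; pod name as it appears in the pod list below.
	kubectl --context embed-certs-093615 -n kube-system describe pod metrics-server-569cc877fc-kwk6t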
	I0812 11:49:39.827672   56845 pod_ready.go:102] pod "coredns-7db6d8ff4d-cjbwn" in "kube-system" namespace has status "Ready":"False"
	I0812 11:49:40.820793   56845 pod_ready.go:92] pod "coredns-7db6d8ff4d-cjbwn" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:40.820818   56845 pod_ready.go:81] duration metric: took 3.006078866s for pod "coredns-7db6d8ff4d-cjbwn" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.820828   56845 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zcpcc" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.825674   56845 pod_ready.go:92] pod "coredns-7db6d8ff4d-zcpcc" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:40.825696   56845 pod_ready.go:81] duration metric: took 4.862671ms for pod "coredns-7db6d8ff4d-zcpcc" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.825705   56845 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.830668   56845 pod_ready.go:92] pod "etcd-embed-certs-093615" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:40.830690   56845 pod_ready.go:81] duration metric: took 4.979449ms for pod "etcd-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.830699   56845 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.834732   56845 pod_ready.go:92] pod "kube-apiserver-embed-certs-093615" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:40.834750   56845 pod_ready.go:81] duration metric: took 4.044023ms for pod "kube-apiserver-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.834759   56845 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.838476   56845 pod_ready.go:92] pod "kube-controller-manager-embed-certs-093615" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:40.838493   56845 pod_ready.go:81] duration metric: took 3.728686ms for pod "kube-controller-manager-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.838502   56845 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-26xvl" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:41.219756   56845 pod_ready.go:92] pod "kube-proxy-26xvl" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:41.219778   56845 pod_ready.go:81] duration metric: took 381.271425ms for pod "kube-proxy-26xvl" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:41.219789   56845 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:41.619078   56845 pod_ready.go:92] pod "kube-scheduler-embed-certs-093615" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:41.619107   56845 pod_ready.go:81] duration metric: took 399.30989ms for pod "kube-scheduler-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:41.619117   56845 pod_ready.go:38] duration metric: took 3.817386457s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:49:41.619135   56845 api_server.go:52] waiting for apiserver process to appear ...
	I0812 11:49:41.619197   56845 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:49:41.634452   56845 api_server.go:72] duration metric: took 4.107285578s to wait for apiserver process to appear ...
	I0812 11:49:41.634480   56845 api_server.go:88] waiting for apiserver healthz status ...
	I0812 11:49:41.634505   56845 api_server.go:253] Checking apiserver healthz at https://192.168.72.191:8443/healthz ...
	I0812 11:49:41.639610   56845 api_server.go:279] https://192.168.72.191:8443/healthz returned 200:
	ok
	I0812 11:49:41.640514   56845 api_server.go:141] control plane version: v1.30.3
	I0812 11:49:41.640537   56845 api_server.go:131] duration metric: took 6.049802ms to wait for apiserver health ...
	I0812 11:49:41.640547   56845 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 11:49:41.823614   56845 system_pods.go:59] 9 kube-system pods found
	I0812 11:49:41.823652   56845 system_pods.go:61] "coredns-7db6d8ff4d-cjbwn" [ec8ff679-9b23-481d-b8c5-207b54e7e5ea] Running
	I0812 11:49:41.823659   56845 system_pods.go:61] "coredns-7db6d8ff4d-zcpcc" [ed76b19c-cd96-4754-ae07-08a2a0b91387] Running
	I0812 11:49:41.823665   56845 system_pods.go:61] "etcd-embed-certs-093615" [853d7fe8-00c2-434f-b88a-2b37e1608906] Running
	I0812 11:49:41.823670   56845 system_pods.go:61] "kube-apiserver-embed-certs-093615" [983122d1-800a-4991-96f8-29ae69ea7166] Running
	I0812 11:49:41.823675   56845 system_pods.go:61] "kube-controller-manager-embed-certs-093615" [b9eceb97-a4bd-43e2-a115-c483c9131fa7] Running
	I0812 11:49:41.823680   56845 system_pods.go:61] "kube-proxy-26xvl" [cacdea2f-2ce2-43ab-8e3e-104a7a40d027] Running
	I0812 11:49:41.823685   56845 system_pods.go:61] "kube-scheduler-embed-certs-093615" [b5653b7a-db54-4584-ab69-1232a9c58d9c] Running
	I0812 11:49:41.823693   56845 system_pods.go:61] "metrics-server-569cc877fc-kwk6t" [5817f68c-ab3e-4b50-acf1-8d56d25dcbcd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 11:49:41.823697   56845 system_pods.go:61] "storage-provisioner" [c29d9422-fc62-4536-974b-70ba940152c2] Running
	I0812 11:49:41.823704   56845 system_pods.go:74] duration metric: took 183.151482ms to wait for pod list to return data ...
	I0812 11:49:41.823711   56845 default_sa.go:34] waiting for default service account to be created ...
	I0812 11:49:42.017840   56845 default_sa.go:45] found service account: "default"
	I0812 11:49:42.017870   56845 default_sa.go:55] duration metric: took 194.151916ms for default service account to be created ...
	I0812 11:49:42.017886   56845 system_pods.go:116] waiting for k8s-apps to be running ...
	I0812 11:49:42.222050   56845 system_pods.go:86] 9 kube-system pods found
	I0812 11:49:42.222084   56845 system_pods.go:89] "coredns-7db6d8ff4d-cjbwn" [ec8ff679-9b23-481d-b8c5-207b54e7e5ea] Running
	I0812 11:49:42.222092   56845 system_pods.go:89] "coredns-7db6d8ff4d-zcpcc" [ed76b19c-cd96-4754-ae07-08a2a0b91387] Running
	I0812 11:49:42.222098   56845 system_pods.go:89] "etcd-embed-certs-093615" [853d7fe8-00c2-434f-b88a-2b37e1608906] Running
	I0812 11:49:42.222104   56845 system_pods.go:89] "kube-apiserver-embed-certs-093615" [983122d1-800a-4991-96f8-29ae69ea7166] Running
	I0812 11:49:42.222110   56845 system_pods.go:89] "kube-controller-manager-embed-certs-093615" [b9eceb97-a4bd-43e2-a115-c483c9131fa7] Running
	I0812 11:49:42.222116   56845 system_pods.go:89] "kube-proxy-26xvl" [cacdea2f-2ce2-43ab-8e3e-104a7a40d027] Running
	I0812 11:49:42.222122   56845 system_pods.go:89] "kube-scheduler-embed-certs-093615" [b5653b7a-db54-4584-ab69-1232a9c58d9c] Running
	I0812 11:49:42.222133   56845 system_pods.go:89] "metrics-server-569cc877fc-kwk6t" [5817f68c-ab3e-4b50-acf1-8d56d25dcbcd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 11:49:42.222140   56845 system_pods.go:89] "storage-provisioner" [c29d9422-fc62-4536-974b-70ba940152c2] Running
	I0812 11:49:42.222157   56845 system_pods.go:126] duration metric: took 204.263322ms to wait for k8s-apps to be running ...
	I0812 11:49:42.222169   56845 system_svc.go:44] waiting for kubelet service to be running ....
	I0812 11:49:42.222224   56845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:49:42.235891   56845 system_svc.go:56] duration metric: took 13.715083ms WaitForService to wait for kubelet
	I0812 11:49:42.235920   56845 kubeadm.go:582] duration metric: took 4.708757648s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 11:49:42.235945   56845 node_conditions.go:102] verifying NodePressure condition ...
	I0812 11:49:42.418727   56845 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 11:49:42.418761   56845 node_conditions.go:123] node cpu capacity is 2
	I0812 11:49:42.418773   56845 node_conditions.go:105] duration metric: took 182.823582ms to run NodePressure ...
	I0812 11:49:42.418789   56845 start.go:241] waiting for startup goroutines ...
	I0812 11:49:42.418799   56845 start.go:246] waiting for cluster config update ...
	I0812 11:49:42.418812   56845 start.go:255] writing updated cluster config ...
	I0812 11:49:42.419150   56845 ssh_runner.go:195] Run: rm -f paused
	I0812 11:49:42.468981   56845 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0812 11:49:42.471931   56845 out.go:177] * Done! kubectl is now configured to use "embed-certs-093615" cluster and "default" namespace by default
	I0812 11:49:40.669207   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:43.741090   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:49.821138   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:52.893281   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:58.973141   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:02.045165   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:08.129133   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:07.530363   57198 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0812 11:50:07.530652   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:50:07.530821   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:50:11.197137   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:12.531246   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:50:12.531502   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:50:17.277119   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:20.349149   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:22.532192   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:50:22.532372   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:50:26.429100   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:29.501158   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:35.581137   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:38.653143   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:42.533597   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:50:42.533815   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:50:44.733130   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:47.805192   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:53.885100   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:56.957154   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:03.037201   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:06.109079   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:12.189138   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:15.261132   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:22.535173   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:51:22.535490   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:51:22.535516   57198 kubeadm.go:310] 
	I0812 11:51:22.535573   57198 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0812 11:51:22.535625   57198 kubeadm.go:310] 		timed out waiting for the condition
	I0812 11:51:22.535646   57198 kubeadm.go:310] 
	I0812 11:51:22.535692   57198 kubeadm.go:310] 	This error is likely caused by:
	I0812 11:51:22.535728   57198 kubeadm.go:310] 		- The kubelet is not running
	I0812 11:51:22.535855   57198 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0812 11:51:22.535870   57198 kubeadm.go:310] 
	I0812 11:51:22.535954   57198 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0812 11:51:22.535985   57198 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0812 11:51:22.536028   57198 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0812 11:51:22.536038   57198 kubeadm.go:310] 
	I0812 11:51:22.536168   57198 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0812 11:51:22.536276   57198 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0812 11:51:22.536290   57198 kubeadm.go:310] 
	I0812 11:51:22.536440   57198 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0812 11:51:22.536532   57198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0812 11:51:22.536610   57198 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0812 11:51:22.536692   57198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0812 11:51:22.536701   57198 kubeadm.go:310] 
	I0812 11:51:22.537300   57198 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 11:51:22.537416   57198 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0812 11:51:22.537516   57198 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
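For context, the repeated [kubelet-check] failures above amount to an HTTP GET against the kubelet's local healthz endpoint, which is exactly what the quoted curl command does. A minimal, hypothetical Go sketch of that probe, assuming the default healthz port 10248 shown in the log (illustration only, not kubeadm's actual check code):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeKubeletHealthz mirrors the check reported above:
// GET http://localhost:10248/healthz; HTTP 200 means the kubelet is healthy.
func probeKubeletHealthz() error {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		// "connection refused" here matches the failures in the log:
		// the kubelet is not listening at all.
		return fmt.Errorf("kubelet healthz unreachable: %w", err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("kubelet unhealthy: %d %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	if err := probeKubeletHealthz(); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kubelet healthz: ok")
}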
	I0812 11:51:22.537602   57198 kubeadm.go:394] duration metric: took 7m56.533771451s to StartCluster
	I0812 11:51:22.537650   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:51:22.537769   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:51:22.583654   57198 cri.go:89] found id: ""
	I0812 11:51:22.583679   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.583686   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:51:22.583692   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:51:22.583739   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:51:22.619477   57198 cri.go:89] found id: ""
	I0812 11:51:22.619510   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.619521   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:51:22.619528   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:51:22.619586   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:51:22.653038   57198 cri.go:89] found id: ""
	I0812 11:51:22.653068   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.653078   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:51:22.653085   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:51:22.653149   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:51:22.686106   57198 cri.go:89] found id: ""
	I0812 11:51:22.686134   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.686142   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:51:22.686148   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:51:22.686196   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:51:22.723533   57198 cri.go:89] found id: ""
	I0812 11:51:22.723560   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.723567   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:51:22.723572   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:51:22.723629   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:51:22.767355   57198 cri.go:89] found id: ""
	I0812 11:51:22.767382   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.767390   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:51:22.767395   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:51:22.767472   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:51:22.807472   57198 cri.go:89] found id: ""
	I0812 11:51:22.807509   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.807522   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:51:22.807530   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:51:22.807604   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:51:22.842565   57198 cri.go:89] found id: ""
	I0812 11:51:22.842594   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.842603   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:51:22.842615   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:51:22.842629   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:51:22.894638   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:51:22.894677   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:51:22.907871   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:51:22.907902   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:51:22.989089   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:51:22.989114   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:51:22.989126   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:51:23.114659   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:51:23.114713   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
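The "Gathering logs" steps above simply shell out to journalctl, dmesg, kubectl, and crictl on the node via ssh_runner. A rough sketch of the same diagnostic sweep run locally with os/exec, assuming the commands shown in the log exist on PATH (illustration only, not minikube's actual runner):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same diagnostic commands minikube runs above, executed locally.
	cmds := []string{
		"sudo journalctl -u kubelet -n 400",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"sudo journalctl -u crio -n 400",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for _, c := range cmds {
		fmt.Println("==>", c)
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("command failed:", err)
		}
	}
}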
	W0812 11:51:23.168124   57198 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0812 11:51:23.168182   57198 out.go:239] * 
	W0812 11:51:23.168252   57198 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0812 11:51:23.168284   57198 out.go:239] * 
	W0812 11:51:23.169113   57198 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 11:51:23.173151   57198 out.go:177] 
	W0812 11:51:23.174712   57198 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0812 11:51:23.174762   57198 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0812 11:51:23.174782   57198 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0812 11:51:23.176508   57198 out.go:177] 
	I0812 11:51:21.341126   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:24.413107   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:30.493143   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:33.569122   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:36.569554   59908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 11:51:36.569591   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetMachineName
	I0812 11:51:36.569943   59908 buildroot.go:166] provisioning hostname "default-k8s-diff-port-581883"
	I0812 11:51:36.569973   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetMachineName
	I0812 11:51:36.570201   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:51:36.571680   59908 machine.go:97] duration metric: took 4m37.426765365s to provisionDockerMachine
	I0812 11:51:36.571724   59908 fix.go:56] duration metric: took 4m37.448153773s for fixHost
	I0812 11:51:36.571736   59908 start.go:83] releasing machines lock for "default-k8s-diff-port-581883", held for 4m37.448177825s
	W0812 11:51:36.571759   59908 start.go:714] error starting host: provision: host is not running
	W0812 11:51:36.571863   59908 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0812 11:51:36.571879   59908 start.go:729] Will try again in 5 seconds ...
	I0812 11:51:41.573924   59908 start.go:360] acquireMachinesLock for default-k8s-diff-port-581883: {Name:mkf1f3ff7da562a087a1e344d13e67c3c8140973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 11:51:41.574052   59908 start.go:364] duration metric: took 85.852µs to acquireMachinesLock for "default-k8s-diff-port-581883"
	I0812 11:51:41.574082   59908 start.go:96] Skipping create...Using existing machine configuration
	I0812 11:51:41.574092   59908 fix.go:54] fixHost starting: 
	I0812 11:51:41.574362   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:51:41.574405   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:51:41.589947   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37355
	I0812 11:51:41.590440   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:51:41.590917   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:51:41.590937   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:51:41.591264   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:51:41.591434   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:51:41.591577   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetState
	I0812 11:51:41.593079   59908 fix.go:112] recreateIfNeeded on default-k8s-diff-port-581883: state=Stopped err=<nil>
	I0812 11:51:41.593104   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	W0812 11:51:41.593250   59908 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 11:51:41.595246   59908 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-581883" ...
	I0812 11:51:41.596770   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Start
	I0812 11:51:41.596979   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Ensuring networks are active...
	I0812 11:51:41.598006   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Ensuring network default is active
	I0812 11:51:41.598500   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Ensuring network mk-default-k8s-diff-port-581883 is active
	I0812 11:51:41.598920   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Getting domain xml...
	I0812 11:51:41.599684   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Creating domain...
	I0812 11:51:42.863317   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting to get IP...
	I0812 11:51:42.864358   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:42.864816   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:42.864907   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:42.864802   61181 retry.go:31] will retry after 220.174363ms: waiting for machine to come up
	I0812 11:51:43.086204   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:43.086832   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:43.086861   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:43.086783   61181 retry.go:31] will retry after 342.897936ms: waiting for machine to come up
	I0812 11:51:43.431059   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:43.431549   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:43.431584   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:43.431497   61181 retry.go:31] will retry after 465.154278ms: waiting for machine to come up
	I0812 11:51:43.898042   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:43.898580   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:43.898604   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:43.898518   61181 retry.go:31] will retry after 498.287765ms: waiting for machine to come up
	I0812 11:51:44.398086   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:44.398736   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:44.398763   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:44.398682   61181 retry.go:31] will retry after 617.809106ms: waiting for machine to come up
	I0812 11:51:45.018733   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:45.019273   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:45.019307   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:45.019217   61181 retry.go:31] will retry after 864.46319ms: waiting for machine to come up
	I0812 11:51:45.885081   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:45.885555   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:45.885585   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:45.885529   61181 retry.go:31] will retry after 1.067767105s: waiting for machine to come up
	I0812 11:51:46.954710   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:46.955061   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:46.955087   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:46.955020   61181 retry.go:31] will retry after 927.472236ms: waiting for machine to come up
	I0812 11:51:47.883766   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:47.884191   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:47.884216   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:47.884146   61181 retry.go:31] will retry after 1.493170608s: waiting for machine to come up
	I0812 11:51:49.378898   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:49.379317   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:49.379350   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:49.379297   61181 retry.go:31] will retry after 1.599397392s: waiting for machine to come up
	I0812 11:51:50.981013   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:50.981714   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:50.981745   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:50.981642   61181 retry.go:31] will retry after 1.779019847s: waiting for machine to come up
	I0812 11:51:52.762246   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:52.762670   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:52.762707   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:52.762629   61181 retry.go:31] will retry after 3.410620248s: waiting for machine to come up
	I0812 11:51:56.175010   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:56.175542   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:56.175573   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:56.175490   61181 retry.go:31] will retry after 3.890343984s: waiting for machine to come up
	I0812 11:52:00.069904   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.070591   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has current primary IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.070606   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Found IP for machine: 192.168.50.114
	I0812 11:52:00.070616   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Reserving static IP address...
	I0812 11:52:00.071153   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Reserved static IP address: 192.168.50.114
	I0812 11:52:00.071183   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for SSH to be available...
	I0812 11:52:00.071206   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-581883", mac: "52:54:00:76:2f:ab", ip: "192.168.50.114"} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:00.071228   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | skip adding static IP to network mk-default-k8s-diff-port-581883 - found existing host DHCP lease matching {name: "default-k8s-diff-port-581883", mac: "52:54:00:76:2f:ab", ip: "192.168.50.114"}
	I0812 11:52:00.071242   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | Getting to WaitForSSH function...
	I0812 11:52:00.073315   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.073647   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:00.073676   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.073838   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | Using SSH client type: external
	I0812 11:52:00.073868   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | Using SSH private key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa (-rw-------)
	I0812 11:52:00.073909   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0812 11:52:00.073926   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | About to run SSH command:
	I0812 11:52:00.073941   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | exit 0
	I0812 11:52:00.201064   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | SSH cmd err, output: <nil>: 
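The retry.go lines above show the pattern used while waiting for the restarted VM: poll for a condition (an IP in the DHCP lease table, then a working SSH connection) and sleep for a growing interval between attempts. A simplified sketch of that loop, with a hypothetical isUp check standing in for the IP/SSH probes in the log (not minikube's actual retry implementation):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls check() with a growing, slightly jittered delay, much like
// the "will retry after ..." lines above. The bounds are illustrative.
func waitFor(check func() bool, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if check() {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	return errors.New("timed out waiting for machine")
}

func main() {
	start := time.Now()
	isUp := func() bool { return time.Since(start) > 3*time.Second } // stand-in probe
	if err := waitFor(isUp, 30*time.Second); err != nil {
		fmt.Println(err)
	}
}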
	I0812 11:52:00.201417   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetConfigRaw
	I0812 11:52:00.202026   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetIP
	I0812 11:52:00.204566   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.204855   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:00.204895   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.205179   59908 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/config.json ...
	I0812 11:52:00.205369   59908 machine.go:94] provisionDockerMachine start ...
	I0812 11:52:00.205387   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:00.205698   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:00.208214   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.208623   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:00.208656   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.208749   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:00.208932   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:00.209111   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:00.209227   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:00.209359   59908 main.go:141] libmachine: Using SSH client type: native
	I0812 11:52:00.209519   59908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0812 11:52:00.209529   59908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0812 11:52:00.317075   59908 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0812 11:52:00.317106   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetMachineName
	I0812 11:52:00.317394   59908 buildroot.go:166] provisioning hostname "default-k8s-diff-port-581883"
	I0812 11:52:00.317427   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetMachineName
	I0812 11:52:00.317617   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:00.320809   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.321256   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:00.321297   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.321415   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:00.321625   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:00.321793   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:00.321927   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:00.322174   59908 main.go:141] libmachine: Using SSH client type: native
	I0812 11:52:00.322337   59908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0812 11:52:00.322350   59908 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-581883 && echo "default-k8s-diff-port-581883" | sudo tee /etc/hostname
	I0812 11:52:00.448512   59908 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-581883
	
	I0812 11:52:00.448544   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:00.451372   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.451915   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:00.451942   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.452144   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:00.452341   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:00.452510   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:00.452661   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:00.452823   59908 main.go:141] libmachine: Using SSH client type: native
	I0812 11:52:00.453021   59908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0812 11:52:00.453038   59908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-581883' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-581883/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-581883' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 11:52:00.569754   59908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
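The shell fragment above ensures /etc/hosts resolves the new hostname: if no line already ends in the machine name, it either rewrites the 127.0.1.1 entry or appends one. The same logic expressed directly in Go, with the path and hostname as parameters purely for illustration:

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the grep/sed/tee logic in the log above.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(hostname)+`$`).Match(data) {
		return nil // hostname already present
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	var out string
	if loopback.Match(data) {
		out = loopback.ReplaceAllString(string(data), "127.0.1.1 "+hostname)
	} else {
		out = strings.TrimRight(string(data), "\n") + "\n127.0.1.1 " + hostname + "\n"
	}
	return os.WriteFile(path, []byte(out), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "default-k8s-diff-port-581883"); err != nil {
		fmt.Println(err)
	}
}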
	I0812 11:52:00.569791   59908 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19409-3774/.minikube CaCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19409-3774/.minikube}
	I0812 11:52:00.569808   59908 buildroot.go:174] setting up certificates
	I0812 11:52:00.569818   59908 provision.go:84] configureAuth start
	I0812 11:52:00.569829   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetMachineName
	I0812 11:52:00.570114   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetIP
	I0812 11:52:00.572834   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.573325   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:00.573357   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.573549   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:00.576212   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.576670   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:00.576717   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.576915   59908 provision.go:143] copyHostCerts
	I0812 11:52:00.576979   59908 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem, removing ...
	I0812 11:52:00.576989   59908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem
	I0812 11:52:00.577051   59908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem (1082 bytes)
	I0812 11:52:00.577148   59908 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem, removing ...
	I0812 11:52:00.577157   59908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem
	I0812 11:52:00.577184   59908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem (1123 bytes)
	I0812 11:52:00.577241   59908 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem, removing ...
	I0812 11:52:00.577248   59908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem
	I0812 11:52:00.577270   59908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem (1679 bytes)
	I0812 11:52:00.577366   59908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-581883 san=[127.0.0.1 192.168.50.114 default-k8s-diff-port-581883 localhost minikube]
	I0812 11:52:01.053674   59908 provision.go:177] copyRemoteCerts
	I0812 11:52:01.053733   59908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 11:52:01.053756   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:01.056305   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.056840   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:01.056894   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.057105   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:01.057325   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:01.057486   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:01.057641   59908 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa Username:docker}
	I0812 11:52:01.142765   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0812 11:52:01.168430   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0812 11:52:01.193360   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0812 11:52:01.218125   59908 provision.go:87] duration metric: took 648.29686ms to configureAuth
	I0812 11:52:01.218151   59908 buildroot.go:189] setting minikube options for container-runtime
	I0812 11:52:01.218337   59908 config.go:182] Loaded profile config "default-k8s-diff-port-581883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 11:52:01.218432   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:01.221497   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.221858   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:01.221887   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.222077   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:01.222261   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:01.222436   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:01.222596   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:01.222736   59908 main.go:141] libmachine: Using SSH client type: native
	I0812 11:52:01.222963   59908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0812 11:52:01.222986   59908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 11:52:01.490986   59908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 11:52:01.491013   59908 machine.go:97] duration metric: took 1.285630113s to provisionDockerMachine
	I0812 11:52:01.491026   59908 start.go:293] postStartSetup for "default-k8s-diff-port-581883" (driver="kvm2")
	I0812 11:52:01.491038   59908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 11:52:01.491054   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:01.491385   59908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 11:52:01.491414   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:01.494451   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.494830   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:01.494881   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.495025   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:01.495216   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:01.495372   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:01.495522   59908 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa Username:docker}
	I0812 11:52:01.579756   59908 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 11:52:01.583802   59908 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 11:52:01.583828   59908 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/addons for local assets ...
	I0812 11:52:01.583952   59908 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/files for local assets ...
	I0812 11:52:01.584051   59908 filesync.go:149] local asset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> 109272.pem in /etc/ssl/certs
	I0812 11:52:01.584167   59908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 11:52:01.593940   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /etc/ssl/certs/109272.pem (1708 bytes)
	I0812 11:52:01.619301   59908 start.go:296] duration metric: took 128.258855ms for postStartSetup
	I0812 11:52:01.619343   59908 fix.go:56] duration metric: took 20.045251384s for fixHost
	I0812 11:52:01.619365   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:01.622507   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.622917   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:01.622954   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.623116   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:01.623308   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:01.623461   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:01.623623   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:01.623803   59908 main.go:141] libmachine: Using SSH client type: native
	I0812 11:52:01.624015   59908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0812 11:52:01.624031   59908 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0812 11:52:01.733552   59908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723463521.708750952
	
	I0812 11:52:01.733588   59908 fix.go:216] guest clock: 1723463521.708750952
	I0812 11:52:01.733613   59908 fix.go:229] Guest: 2024-08-12 11:52:01.708750952 +0000 UTC Remote: 2024-08-12 11:52:01.619347823 +0000 UTC m=+302.640031526 (delta=89.403129ms)
	I0812 11:52:01.733639   59908 fix.go:200] guest clock delta is within tolerance: 89.403129ms
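The "guest clock" lines above compare the time reported by the VM (obtained with date +%s.%N) against the host's wall clock and accept the host only when the skew is small. A minimal Go sketch of that comparison follows; the 2s tolerance and the hard-coded guest timestamp are assumptions for illustration, not minikube's actual fix.go logic.

// Illustrative clock-delta check, mirroring the log lines above.
package main

import (
	"fmt"
	"time"
)

// clockDeltaOK returns the absolute skew between guest and host clocks and
// whether it falls inside the given tolerance.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Value parsed from `date +%s.%N` on the guest, as seen in the log.
	guest := time.Unix(1723463521, 708750952)
	host := time.Now()
	if delta, ok := clockDeltaOK(guest, host, 2*time.Second); ok {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	}
}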
	I0812 11:52:01.733646   59908 start.go:83] releasing machines lock for "default-k8s-diff-port-581883", held for 20.15958144s
	I0812 11:52:01.733673   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:01.733971   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetIP
	I0812 11:52:01.736957   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.737359   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:01.737388   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.737569   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:01.738113   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:01.738315   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:01.738404   59908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 11:52:01.738444   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:01.738710   59908 ssh_runner.go:195] Run: cat /version.json
	I0812 11:52:01.738746   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:01.741424   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.741655   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.741906   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:01.741935   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.742092   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:01.742120   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:01.742120   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.742293   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:01.742317   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:01.742487   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:01.742501   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:01.742693   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:01.742709   59908 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa Username:docker}
	I0812 11:52:01.742854   59908 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa Username:docker}
	I0812 11:52:01.821742   59908 ssh_runner.go:195] Run: systemctl --version
	I0812 11:52:01.854649   59908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 11:52:01.994050   59908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 11:52:02.000754   59908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 11:52:02.000848   59908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 11:52:02.017212   59908 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0812 11:52:02.017240   59908 start.go:495] detecting cgroup driver to use...
	I0812 11:52:02.017310   59908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 11:52:02.035650   59908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 11:52:02.050036   59908 docker.go:217] disabling cri-docker service (if available) ...
	I0812 11:52:02.050114   59908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 11:52:02.063916   59908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 11:52:02.078938   59908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 11:52:02.194945   59908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 11:52:02.366538   59908 docker.go:233] disabling docker service ...
	I0812 11:52:02.366616   59908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 11:52:02.380648   59908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 11:52:02.393284   59908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 11:52:02.513560   59908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 11:52:02.638028   59908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 11:52:02.662395   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 11:52:02.683732   59908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0812 11:52:02.683798   59908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:52:02.695379   59908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 11:52:02.695437   59908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:52:02.706905   59908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:52:02.718338   59908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:52:02.729708   59908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 11:52:02.740127   59908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:52:02.750198   59908 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:52:02.766470   59908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
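The sed commands just above rewrite /etc/crio/crio.conf.d/02-crio.conf over SSH to pin the pause image, switch the cgroup manager to cgroupfs, and open unprivileged ports. As a rough local illustration only (not minikube's code; it needs write access to that path), the pause_image rewrite could be expressed in Go like this:

// Illustrative sketch: same edit the first sed above performs, done with a
// regexp rewrite of the CRI-O drop-in config. Path taken from the log.
package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Replace any existing pause_image line with the pinned registry image.
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		panic(err)
	}
}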
	I0812 11:52:02.777845   59908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 11:52:02.788254   59908 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0812 11:52:02.788322   59908 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0812 11:52:02.800552   59908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 11:52:02.809932   59908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:52:02.950568   59908 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0812 11:52:03.087957   59908 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 11:52:03.088031   59908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 11:52:03.094543   59908 start.go:563] Will wait 60s for crictl version
	I0812 11:52:03.094597   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:52:03.098447   59908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 11:52:03.139477   59908 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0812 11:52:03.139561   59908 ssh_runner.go:195] Run: crio --version
	I0812 11:52:03.169931   59908 ssh_runner.go:195] Run: crio --version
	I0812 11:52:03.202808   59908 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0812 11:52:03.203979   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetIP
	I0812 11:52:03.206641   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:03.207046   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:03.207078   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:03.207300   59908 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0812 11:52:03.211169   59908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 11:52:03.222676   59908 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-581883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-581883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0812 11:52:03.222798   59908 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 11:52:03.222835   59908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 11:52:03.258003   59908 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0812 11:52:03.258074   59908 ssh_runner.go:195] Run: which lz4
	I0812 11:52:03.261945   59908 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0812 11:52:03.266002   59908 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0812 11:52:03.266035   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0812 11:52:04.616538   59908 crio.go:462] duration metric: took 1.354621946s to copy over tarball
	I0812 11:52:04.616600   59908 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0812 11:52:06.801880   59908 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.185257635s)
	I0812 11:52:06.801905   59908 crio.go:469] duration metric: took 2.18534207s to extract the tarball
	I0812 11:52:06.801912   59908 ssh_runner.go:146] rm: /preloaded.tar.lz4
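The preload handling above boils down to: check whether /preloaded.tar.lz4 already exists on the guest, copy the cached tarball over if it does not, extract it under /var, then remove it. A rough Go analogue of the check-and-extract steps, shelling out to tar with the same flags the runner uses (illustrative only; sudo and the paths are taken from the log, and the copy step is omitted):

// Illustrative check-then-extract sketch for the preload tarball.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	// Existence check, analogous to the `stat` call in the log.
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("preload tarball missing, would copy it over first:", err)
		return
	}
	// Extract under /var, preserving security.capability xattrs.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("extract failed:", err)
	}
}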
	I0812 11:52:06.840167   59908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 11:52:06.887647   59908 crio.go:514] all images are preloaded for cri-o runtime.
	I0812 11:52:06.887669   59908 cache_images.go:84] Images are preloaded, skipping loading
	I0812 11:52:06.887677   59908 kubeadm.go:934] updating node { 192.168.50.114 8444 v1.30.3 crio true true} ...
	I0812 11:52:06.887780   59908 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-581883 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-581883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0812 11:52:06.887863   59908 ssh_runner.go:195] Run: crio config
	I0812 11:52:06.944347   59908 cni.go:84] Creating CNI manager for ""
	I0812 11:52:06.944372   59908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:52:06.944385   59908 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0812 11:52:06.944404   59908 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.114 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-581883 NodeName:default-k8s-diff-port-581883 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0812 11:52:06.944582   59908 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.114
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-581883"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0812 11:52:06.944660   59908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0812 11:52:06.954792   59908 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 11:52:06.954853   59908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0812 11:52:06.964625   59908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0812 11:52:06.981467   59908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 11:52:06.998649   59908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0812 11:52:07.017062   59908 ssh_runner.go:195] Run: grep 192.168.50.114	control-plane.minikube.internal$ /etc/hosts
	I0812 11:52:07.020710   59908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 11:52:07.033442   59908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:52:07.164673   59908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 11:52:07.183526   59908 certs.go:68] Setting up /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883 for IP: 192.168.50.114
	I0812 11:52:07.183574   59908 certs.go:194] generating shared ca certs ...
	I0812 11:52:07.183598   59908 certs.go:226] acquiring lock for ca certs: {Name:mkab0b83a49d5695bc804bf960b468b7a47f73f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:52:07.183769   59908 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key
	I0812 11:52:07.183813   59908 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key
	I0812 11:52:07.183827   59908 certs.go:256] generating profile certs ...
	I0812 11:52:07.183948   59908 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/client.key
	I0812 11:52:07.184117   59908 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/apiserver.key.ebc625f3
	I0812 11:52:07.184198   59908 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/proxy-client.key
	I0812 11:52:07.184361   59908 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem (1338 bytes)
	W0812 11:52:07.184402   59908 certs.go:480] ignoring /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927_empty.pem, impossibly tiny 0 bytes
	I0812 11:52:07.184416   59908 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem (1679 bytes)
	I0812 11:52:07.184448   59908 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem (1082 bytes)
	I0812 11:52:07.184478   59908 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem (1123 bytes)
	I0812 11:52:07.184509   59908 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem (1679 bytes)
	I0812 11:52:07.184562   59908 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem (1708 bytes)
	I0812 11:52:07.185388   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 11:52:07.217465   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 11:52:07.248781   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 11:52:07.278177   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 11:52:07.313023   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0812 11:52:07.336720   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0812 11:52:07.360266   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 11:52:07.388850   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0812 11:52:07.413532   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem --> /usr/share/ca-certificates/10927.pem (1338 bytes)
	I0812 11:52:07.438304   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /usr/share/ca-certificates/109272.pem (1708 bytes)
	I0812 11:52:07.462084   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 11:52:07.486176   59908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 11:52:07.504165   59908 ssh_runner.go:195] Run: openssl version
	I0812 11:52:07.510273   59908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10927.pem && ln -fs /usr/share/ca-certificates/10927.pem /etc/ssl/certs/10927.pem"
	I0812 11:52:07.520671   59908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10927.pem
	I0812 11:52:07.525096   59908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 10:33 /usr/share/ca-certificates/10927.pem
	I0812 11:52:07.525158   59908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10927.pem
	I0812 11:52:07.531038   59908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10927.pem /etc/ssl/certs/51391683.0"
	I0812 11:52:07.542971   59908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109272.pem && ln -fs /usr/share/ca-certificates/109272.pem /etc/ssl/certs/109272.pem"
	I0812 11:52:07.554939   59908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109272.pem
	I0812 11:52:07.559868   59908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 10:33 /usr/share/ca-certificates/109272.pem
	I0812 11:52:07.559928   59908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109272.pem
	I0812 11:52:07.565655   59908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109272.pem /etc/ssl/certs/3ec20f2e.0"
	I0812 11:52:07.578139   59908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 11:52:07.589333   59908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:52:07.594679   59908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:52:07.594755   59908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:52:07.600616   59908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 11:52:07.612028   59908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 11:52:07.617247   59908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0812 11:52:07.623826   59908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0812 11:52:07.630443   59908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0812 11:52:07.637184   59908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0812 11:52:07.643723   59908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0812 11:52:07.650269   59908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
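Each "openssl x509 ... -checkend 86400" call above asks whether the given certificate expires within the next 24 hours. An equivalent check in Go for one of the certs from the log (illustrative sketch, not minikube's implementation):

// Illustrative 24-hour expiry check for a PEM-encoded certificate.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Mirrors `-checkend 86400`: fail if NotAfter is less than 24h away.
	if time.Until(cert.NotAfter) > 24*time.Hour {
		fmt.Println("certificate will not expire within 24h")
	} else {
		fmt.Println("certificate expires within 24h, would regenerate")
	}
}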
	I0812 11:52:07.657049   59908 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-581883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-581883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:52:07.657136   59908 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0812 11:52:07.657218   59908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 11:52:07.695064   59908 cri.go:89] found id: ""
	I0812 11:52:07.695136   59908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0812 11:52:07.705707   59908 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0812 11:52:07.705725   59908 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0812 11:52:07.705781   59908 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0812 11:52:07.715748   59908 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0812 11:52:07.717230   59908 kubeconfig.go:125] found "default-k8s-diff-port-581883" server: "https://192.168.50.114:8444"
	I0812 11:52:07.720217   59908 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0812 11:52:07.730557   59908 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.114
	I0812 11:52:07.730596   59908 kubeadm.go:1160] stopping kube-system containers ...
	I0812 11:52:07.730609   59908 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0812 11:52:07.730672   59908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 11:52:07.766039   59908 cri.go:89] found id: ""
	I0812 11:52:07.766114   59908 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0812 11:52:07.784359   59908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 11:52:07.794750   59908 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 11:52:07.794781   59908 kubeadm.go:157] found existing configuration files:
	
	I0812 11:52:07.794957   59908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0812 11:52:07.805063   59908 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 11:52:07.805137   59908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 11:52:07.815283   59908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0812 11:52:07.825460   59908 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 11:52:07.825535   59908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 11:52:07.836322   59908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0812 11:52:07.846381   59908 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 11:52:07.846438   59908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 11:52:07.856471   59908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0812 11:52:07.866349   59908 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 11:52:07.866415   59908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 11:52:07.876379   59908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 11:52:07.886723   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 11:52:07.993071   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 11:52:08.756027   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0812 11:52:08.978821   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 11:52:09.048377   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0812 11:52:09.146562   59908 api_server.go:52] waiting for apiserver process to appear ...
	I0812 11:52:09.146658   59908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:52:09.647073   59908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:52:10.147700   59908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:52:10.647212   59908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:52:11.147702   59908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:52:11.174640   59908 api_server.go:72] duration metric: took 2.028079757s to wait for apiserver process to appear ...
	I0812 11:52:11.174665   59908 api_server.go:88] waiting for apiserver healthz status ...
	I0812 11:52:11.174698   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:11.175152   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": dial tcp 192.168.50.114:8444: connect: connection refused
	I0812 11:52:11.674838   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:16.675764   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 11:52:16.675832   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:21.676084   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 11:52:21.676129   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:26.676483   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 11:52:26.676531   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:31.676994   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 11:52:31.677032   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:31.841007   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": read tcp 192.168.50.1:45150->192.168.50.114:8444: read: connection reset by peer
	I0812 11:52:32.175501   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:32.176109   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": dial tcp 192.168.50.114:8444: connect: connection refused
	I0812 11:52:32.675714   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:37.676528   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 11:52:37.676575   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:42.677744   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 11:52:42.677782   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:47.679062   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 11:52:47.679139   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:50.075690   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0812 11:52:50.075722   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0812 11:52:50.075736   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:50.231100   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0812 11:52:50.231129   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0812 11:52:50.231143   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:50.273525   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:50.273564   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
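	The api_server.go healthz lines above and below record a readiness poll: GET https://192.168.50.114:8444/healthz until it returns 200, treating connection errors, 403 responses (anonymous access forbidden) and 500 responses (post-start hooks still failing) as "not ready yet". A minimal Go sketch of such a loop follows; the 4-minute deadline is an assumption, and TLS verification is skipped here purely to keep the example self-contained, which the real check does not necessarily do.

// Illustrative healthz readiness poll against the apiserver endpoint above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.50.114:8444/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			// 403/500 while hooks finish: keep retrying, as the log shows.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver did not become healthy before the deadline")
}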
	I0812 11:52:50.675005   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:50.681580   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:50.681621   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:51.175129   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:51.188048   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:51.188075   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:51.675218   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:51.684784   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:51.684822   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:52.175465   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:52.179666   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:52.179686   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:52.675234   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:52.680948   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:52.680972   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:53.175533   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:53.180849   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:53.180889   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:53.675084   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:53.680320   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:53.680352   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:54.175057   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:54.180061   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:54.180087   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:54.675117   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:54.679922   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:54.679950   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:55.175569   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:55.179883   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:55.179908   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:55.675522   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:55.680182   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 200:
	ok
	I0812 11:52:55.686477   59908 api_server.go:141] control plane version: v1.30.3
	I0812 11:52:55.686505   59908 api_server.go:131] duration metric: took 44.511833813s to wait for apiserver health ...
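The repeated 500 responses above come from minikube polling the apiserver's /healthz endpoint until it reports 200, printing the per-hook breakdown on each failure. Below is a minimal sketch of such a poll, not minikube's actual api_server.go implementation: the URL is taken from the log, while the timeout, retry interval, and the insecure TLS setting are illustrative assumptions.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
// On a 500 it prints the body, which lists each poststarthook as [+] ok or
// [-] failed, matching the output captured in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver certificate is not trusted by this host; a real
		// client would load the cluster CA instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	// Address taken from the log; the two-minute budget is an assumption.
	if err := waitForHealthz("https://192.168.50.114:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}

In the run above the equivalent loop succeeds after roughly 44.5s, once the last post-start hooks report ok.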
	I0812 11:52:55.686513   59908 cni.go:84] Creating CNI manager for ""
	I0812 11:52:55.686519   59908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:52:55.688415   59908 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0812 11:52:55.689745   59908 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0812 11:52:55.700910   59908 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
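The conflist contents are not printed here, only the byte count. For orientation, a bridge CNI configuration of the general shape behind the "recommending bridge" step above looks like the sketch below; the concrete field values (subnet, plugin options) are assumptions rather than the actual 496-byte file minikube copied.

package main

import "os"

// A generic two-plugin bridge conflist: the bridge plugin with host-local IPAM
// plus portmap for hostPort support. Values are illustrative, not the exact
// file written in the log above.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	// Same destination path as the scp step in the log.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}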
	I0812 11:52:55.719588   59908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 11:52:55.729581   59908 system_pods.go:59] 8 kube-system pods found
	I0812 11:52:55.729622   59908 system_pods.go:61] "coredns-7db6d8ff4d-86flr" [703201f6-ba92-45f7-b273-ee508cf51e2b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0812 11:52:55.729630   59908 system_pods.go:61] "etcd-default-k8s-diff-port-581883" [98074b68-6274-4496-8fd3-7bad8b59b063] Running
	I0812 11:52:55.729640   59908 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-581883" [3f9d02cd-8b6f-4640-98e2-ebc5145444ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0812 11:52:55.729651   59908 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-581883" [b6c17f8f-18eb-41e6-9ef6-bab882066d51] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0812 11:52:55.729662   59908 system_pods.go:61] "kube-proxy-h6fzz" [b0f6bcc8-263a-4b23-a60b-c67475a868bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0812 11:52:55.729673   59908 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-581883" [3b8e21a4-9578-40fc-be22-8a469b5e9ff2] Running
	I0812 11:52:55.729682   59908 system_pods.go:61] "metrics-server-569cc877fc-wcpgl" [11f6c813-ebc1-4712-b758-cb08ff921d77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 11:52:55.729693   59908 system_pods.go:61] "storage-provisioner" [93affc3b-a4e7-4c19-824c-3eec33616acc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0812 11:52:55.729702   59908 system_pods.go:74] duration metric: took 10.095218ms to wait for pod list to return data ...
	I0812 11:52:55.729712   59908 node_conditions.go:102] verifying NodePressure condition ...
	I0812 11:52:55.733812   59908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 11:52:55.733841   59908 node_conditions.go:123] node cpu capacity is 2
	I0812 11:52:55.733857   59908 node_conditions.go:105] duration metric: took 4.136436ms to run NodePressure ...
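The system_pods.go and node_conditions.go steps above list the kube-system pods and read node CPU and ephemeral-storage capacity through the Kubernetes API. A rough client-go equivalent is sketched below; it is an illustration rather than minikube's own code, and the kubeconfig path is a placeholder for the profile kubeconfig referenced later in the log.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; the test run uses the profile's own kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Equivalent of the "waiting for kube-system pods to appear" step.
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

	// Equivalent of the ephemeral-storage / cpu capacity checks above.
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity["cpu"]
		storage := n.Status.Capacity["ephemeral-storage"]
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}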
	I0812 11:52:55.733877   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 11:52:56.014193   59908 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0812 11:52:56.026600   59908 kubeadm.go:739] kubelet initialised
	I0812 11:52:56.026629   59908 kubeadm.go:740] duration metric: took 12.405458ms waiting for restarted kubelet to initialise ...
	I0812 11:52:56.026637   59908 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:52:56.031669   59908 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-86flr" in "kube-system" namespace to be "Ready" ...
	I0812 11:52:56.042499   59908 pod_ready.go:97] node "default-k8s-diff-port-581883" hosting pod "coredns-7db6d8ff4d-86flr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.042526   59908 pod_ready.go:81] duration metric: took 10.82967ms for pod "coredns-7db6d8ff4d-86flr" in "kube-system" namespace to be "Ready" ...
	E0812 11:52:56.042537   59908 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-581883" hosting pod "coredns-7db6d8ff4d-86flr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.042547   59908 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:52:56.048265   59908 pod_ready.go:97] node "default-k8s-diff-port-581883" hosting pod "etcd-default-k8s-diff-port-581883" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.048290   59908 pod_ready.go:81] duration metric: took 5.732651ms for pod "etcd-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	E0812 11:52:56.048307   59908 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-581883" hosting pod "etcd-default-k8s-diff-port-581883" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.048315   59908 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:52:56.054613   59908 pod_ready.go:97] node "default-k8s-diff-port-581883" hosting pod "kube-apiserver-default-k8s-diff-port-581883" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.054639   59908 pod_ready.go:81] duration metric: took 6.314697ms for pod "kube-apiserver-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	E0812 11:52:56.054652   59908 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-581883" hosting pod "kube-apiserver-default-k8s-diff-port-581883" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.054660   59908 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:52:56.125380   59908 pod_ready.go:97] node "default-k8s-diff-port-581883" hosting pod "kube-controller-manager-default-k8s-diff-port-581883" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.125418   59908 pod_ready.go:81] duration metric: took 70.74807ms for pod "kube-controller-manager-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	E0812 11:52:56.125433   59908 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-581883" hosting pod "kube-controller-manager-default-k8s-diff-port-581883" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.125441   59908 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-h6fzz" in "kube-system" namespace to be "Ready" ...
	I0812 11:52:56.523216   59908 pod_ready.go:97] node "default-k8s-diff-port-581883" hosting pod "kube-proxy-h6fzz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.523251   59908 pod_ready.go:81] duration metric: took 397.801141ms for pod "kube-proxy-h6fzz" in "kube-system" namespace to be "Ready" ...
	E0812 11:52:56.523263   59908 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-581883" hosting pod "kube-proxy-h6fzz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.523272   59908 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:52:56.923229   59908 pod_ready.go:97] node "default-k8s-diff-port-581883" hosting pod "kube-scheduler-default-k8s-diff-port-581883" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.923269   59908 pod_ready.go:81] duration metric: took 399.981518ms for pod "kube-scheduler-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	E0812 11:52:56.923285   59908 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-581883" hosting pod "kube-scheduler-default-k8s-diff-port-581883" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.923295   59908 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace to be "Ready" ...
	I0812 11:52:57.323846   59908 pod_ready.go:97] node "default-k8s-diff-port-581883" hosting pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:57.323877   59908 pod_ready.go:81] duration metric: took 400.572011ms for pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace to be "Ready" ...
	E0812 11:52:57.323888   59908 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-581883" hosting pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:57.323896   59908 pod_ready.go:38] duration metric: took 1.297248784s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
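Each per-pod wait above is skipped because the hosting node is not yet Ready; once it is, the wait reduces to reading the PodReady condition from pod status. A minimal sketch of both condition checks, given objects fetched with a client like the one in the previous sketch (illustrative only, not minikube's pod_ready.go):

package readiness

import corev1 "k8s.io/api/core/v1"

// podReady reports whether a pod's PodReady condition is True, which is what
// the "Ready" waits above ultimately test once the node itself is Ready.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// nodeReady reports whether a node's Ready condition is True; while it is
// False the waits above log "(skipping!)" as seen in this run.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}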
	I0812 11:52:57.323911   59908 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 11:52:57.336325   59908 ops.go:34] apiserver oom_adj: -16
	I0812 11:52:57.336345   59908 kubeadm.go:597] duration metric: took 49.630615077s to restartPrimaryControlPlane
	I0812 11:52:57.336365   59908 kubeadm.go:394] duration metric: took 49.67932273s to StartCluster
	I0812 11:52:57.336380   59908 settings.go:142] acquiring lock: {Name:mk4060151bda1f131d6abfbbd19b6a8ab3b5e774 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:52:57.336447   59908 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 11:52:57.338064   59908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/kubeconfig: {Name:mke68f1d372d8e5baa69199b426efaec54860499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:52:57.338331   59908 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.114 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 11:52:57.338433   59908 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0812 11:52:57.338521   59908 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-581883"
	I0812 11:52:57.338536   59908 config.go:182] Loaded profile config "default-k8s-diff-port-581883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 11:52:57.338551   59908 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-581883"
	I0812 11:52:57.338587   59908 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-581883"
	I0812 11:52:57.338558   59908 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-581883"
	W0812 11:52:57.338662   59908 addons.go:243] addon storage-provisioner should already be in state true
	I0812 11:52:57.338695   59908 host.go:66] Checking if "default-k8s-diff-port-581883" exists ...
	I0812 11:52:57.338563   59908 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-581883"
	I0812 11:52:57.338755   59908 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-581883"
	W0812 11:52:57.338764   59908 addons.go:243] addon metrics-server should already be in state true
	I0812 11:52:57.338788   59908 host.go:66] Checking if "default-k8s-diff-port-581883" exists ...
	I0812 11:52:57.339032   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:52:57.339033   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:52:57.339035   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:52:57.339067   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:52:57.339084   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:52:57.339065   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:52:57.340300   59908 out.go:177] * Verifying Kubernetes components...
	I0812 11:52:57.342119   59908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:52:57.356069   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43019
	I0812 11:52:57.356172   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35497
	I0812 11:52:57.356610   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:52:57.356723   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:52:57.357168   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:52:57.357189   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:52:57.357329   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:52:57.357356   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:52:57.357543   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:52:57.357718   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:52:57.358105   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:52:57.358143   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:52:57.358331   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:52:57.358367   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:52:57.360134   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41803
	I0812 11:52:57.360536   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:52:57.361016   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:52:57.361041   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:52:57.361371   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:52:57.361569   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetState
	I0812 11:52:57.365260   59908 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-581883"
	W0812 11:52:57.365279   59908 addons.go:243] addon default-storageclass should already be in state true
	I0812 11:52:57.365312   59908 host.go:66] Checking if "default-k8s-diff-port-581883" exists ...
	I0812 11:52:57.365596   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:52:57.365639   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:52:57.377488   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34035
	I0812 11:52:57.378076   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:52:57.378581   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41469
	I0812 11:52:57.378657   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:52:57.378680   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:52:57.378965   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:52:57.379025   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:52:57.379251   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetState
	I0812 11:52:57.379656   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:52:57.379683   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:52:57.380105   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:52:57.380391   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetState
	I0812 11:52:57.382273   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:57.382496   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:57.383601   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33225
	I0812 11:52:57.384062   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:52:57.384739   59908 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 11:52:57.384750   59908 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0812 11:52:57.384914   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:52:57.384940   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:52:57.385293   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:52:57.385956   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:52:57.386002   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:52:57.386314   59908 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0812 11:52:57.386336   59908 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0812 11:52:57.386355   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:57.386386   59908 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 11:52:57.386398   59908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 11:52:57.386416   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:57.390135   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:57.390335   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:57.390669   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:57.390729   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:57.391183   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:57.391187   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:57.391251   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:57.391393   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:57.391432   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:57.391571   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:57.391592   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:57.391722   59908 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa Username:docker}
	I0812 11:52:57.391758   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:57.391921   59908 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa Username:docker}
	I0812 11:52:57.431097   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39357
	I0812 11:52:57.431600   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:52:57.432116   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:52:57.432140   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:52:57.432506   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:52:57.432702   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetState
	I0812 11:52:57.434513   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:57.434753   59908 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 11:52:57.434772   59908 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 11:52:57.434791   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:57.438433   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:57.438917   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:57.438951   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:57.439150   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:57.439384   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:57.439574   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:57.439744   59908 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa Username:docker}
	I0812 11:52:57.547325   59908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 11:52:57.566163   59908 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-581883" to be "Ready" ...
	I0812 11:52:57.633469   59908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 11:52:57.641330   59908 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0812 11:52:57.641355   59908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0812 11:52:57.662909   59908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 11:52:57.691294   59908 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0812 11:52:57.691321   59908 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0812 11:52:57.746668   59908 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 11:52:57.746693   59908 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0812 11:52:57.787970   59908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 11:52:58.628106   59908 main.go:141] libmachine: Making call to close driver server
	I0812 11:52:58.628134   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Close
	I0812 11:52:58.628106   59908 main.go:141] libmachine: Making call to close driver server
	I0812 11:52:58.628195   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Close
	I0812 11:52:58.628464   59908 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:52:58.628481   59908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:52:58.628490   59908 main.go:141] libmachine: Making call to close driver server
	I0812 11:52:58.628498   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Close
	I0812 11:52:58.628611   59908 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:52:58.628626   59908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:52:58.628647   59908 main.go:141] libmachine: Making call to close driver server
	I0812 11:52:58.628651   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | Closing plugin on server side
	I0812 11:52:58.628655   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Close
	I0812 11:52:58.628775   59908 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:52:58.628785   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | Closing plugin on server side
	I0812 11:52:58.628791   59908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:52:58.630407   59908 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:52:58.630424   59908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:52:58.634739   59908 main.go:141] libmachine: Making call to close driver server
	I0812 11:52:58.634759   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Close
	I0812 11:52:58.635034   59908 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:52:58.635053   59908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:52:58.643171   59908 main.go:141] libmachine: Making call to close driver server
	I0812 11:52:58.643191   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Close
	I0812 11:52:58.643484   59908 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:52:58.643502   59908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:52:58.643511   59908 main.go:141] libmachine: Making call to close driver server
	I0812 11:52:58.643520   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Close
	I0812 11:52:58.643532   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | Closing plugin on server side
	I0812 11:52:58.643732   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | Closing plugin on server side
	I0812 11:52:58.643754   59908 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:52:58.643762   59908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:52:58.643771   59908 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-581883"
	I0812 11:52:58.645811   59908 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0812 11:52:58.647443   59908 addons.go:510] duration metric: took 1.309010451s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
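	The three addons reported enabled above can be checked by hand against the same cluster once the apiserver is reachable. A minimal sketch, assuming the default object names used by minikube's manifests (the deployment name is inferred from the metrics-server pod name that appears later in this log):
	  kubectl --context default-k8s-diff-port-581883 -n kube-system get deployment metrics-server
	  kubectl --context default-k8s-diff-port-581883 -n kube-system get pod storage-provisioner
	  kubectl --context default-k8s-diff-port-581883 get storageclass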
	I0812 11:52:59.569732   59908 node_ready.go:53] node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:53:01.570136   59908 node_ready.go:53] node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:53:04.069965   59908 node_ready.go:53] node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:53:05.570009   59908 node_ready.go:49] node "default-k8s-diff-port-581883" has status "Ready":"True"
	I0812 11:53:05.570039   59908 node_ready.go:38] duration metric: took 8.003840242s for node "default-k8s-diff-port-581883" to be "Ready" ...
	I0812 11:53:05.570050   59908 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:53:05.577206   59908 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-86flr" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:07.584071   59908 pod_ready.go:102] pod "coredns-7db6d8ff4d-86flr" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:08.583523   59908 pod_ready.go:92] pod "coredns-7db6d8ff4d-86flr" in "kube-system" namespace has status "Ready":"True"
	I0812 11:53:08.583550   59908 pod_ready.go:81] duration metric: took 3.006317399s for pod "coredns-7db6d8ff4d-86flr" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.583559   59908 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.589137   59908 pod_ready.go:92] pod "etcd-default-k8s-diff-port-581883" in "kube-system" namespace has status "Ready":"True"
	I0812 11:53:08.589163   59908 pod_ready.go:81] duration metric: took 5.595854ms for pod "etcd-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.589175   59908 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.593746   59908 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-581883" in "kube-system" namespace has status "Ready":"True"
	I0812 11:53:08.593767   59908 pod_ready.go:81] duration metric: took 4.585829ms for pod "kube-apiserver-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.593776   59908 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.598058   59908 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-581883" in "kube-system" namespace has status "Ready":"True"
	I0812 11:53:08.598078   59908 pod_ready.go:81] duration metric: took 4.296254ms for pod "kube-controller-manager-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.598087   59908 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h6fzz" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.603106   59908 pod_ready.go:92] pod "kube-proxy-h6fzz" in "kube-system" namespace has status "Ready":"True"
	I0812 11:53:08.603127   59908 pod_ready.go:81] duration metric: took 5.033938ms for pod "kube-proxy-h6fzz" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.603136   59908 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.981404   59908 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-581883" in "kube-system" namespace has status "Ready":"True"
	I0812 11:53:08.981429   59908 pod_ready.go:81] duration metric: took 378.286388ms for pod "kube-scheduler-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.981439   59908 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:10.988175   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:13.488230   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:15.987639   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:18.487540   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:20.490803   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:22.987167   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:25.488840   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:27.988661   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:30.487605   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:32.487748   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:34.488109   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:36.987016   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:38.987165   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:40.989187   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:43.487407   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:45.487714   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:47.487961   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:49.988540   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:52.487216   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:54.487433   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:56.487958   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:58.489095   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:00.987353   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:02.989138   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:05.488174   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:07.988702   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:10.488396   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:12.988099   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:14.988220   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:16.988395   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:19.491228   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:21.987397   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:23.987898   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:26.487993   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:28.489384   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:30.989371   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:33.488670   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:35.987526   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:37.988823   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:40.488488   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:42.488612   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:44.989023   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:46.990079   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:49.488206   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:51.488446   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:53.988007   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:56.488200   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:58.490348   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:00.988756   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:03.487527   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:05.987624   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:07.989990   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:10.487888   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:12.488656   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:14.489648   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:16.988551   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:19.488408   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:21.988902   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:24.487895   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:26.988377   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:29.488082   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:31.986995   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:33.987359   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:35.989125   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:38.489945   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:40.493189   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:42.988399   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:45.487307   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:47.487758   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:49.487798   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:51.987795   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:53.988376   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:55.990060   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:58.487684   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:00.487893   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:02.988185   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:04.988436   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:07.487867   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:09.987976   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:11.988078   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:13.988354   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:15.988676   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:18.488658   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:20.987780   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:23.486965   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:25.487065   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:27.487891   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:29.488825   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:31.988732   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:34.487771   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:36.988555   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:39.489154   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:41.987687   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:43.990010   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:45.991210   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:48.487381   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:50.987943   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:53.487657   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:55.987206   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:57.988164   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:59.990098   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:57:02.486732   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:57:04.488492   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:57:06.987443   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:57:08.988727   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:57:08.988756   59908 pod_ready.go:81] duration metric: took 4m0.007310185s for pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace to be "Ready" ...
	E0812 11:57:08.988768   59908 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0812 11:57:08.988777   59908 pod_ready.go:38] duration metric: took 4m3.418715457s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
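	The four-minute wait above times out with metrics-server-569cc877fc-wcpgl still not Ready. A sketch of the manual follow-up one could run against the same profile to see why the container never becomes ready (pod name and namespace taken from the log; this is not part of the automated run):
	  kubectl --context default-k8s-diff-port-581883 -n kube-system describe pod metrics-server-569cc877fc-wcpgl
	  kubectl --context default-k8s-diff-port-581883 -n kube-system logs metrics-server-569cc877fc-wcpgl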
	I0812 11:57:08.988795   59908 api_server.go:52] waiting for apiserver process to appear ...
	I0812 11:57:08.988823   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:57:08.988909   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:57:09.035203   59908 cri.go:89] found id: "87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98"
	I0812 11:57:09.035230   59908 cri.go:89] found id: "399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1"
	I0812 11:57:09.035236   59908 cri.go:89] found id: ""
	I0812 11:57:09.035244   59908 logs.go:276] 2 containers: [87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98 399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1]
	I0812 11:57:09.035298   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.039940   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.044354   59908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:57:09.044430   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:57:09.079692   59908 cri.go:89] found id: "a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126"
	I0812 11:57:09.079716   59908 cri.go:89] found id: ""
	I0812 11:57:09.079725   59908 logs.go:276] 1 containers: [a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126]
	I0812 11:57:09.079788   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.084499   59908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:57:09.084576   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:57:09.124721   59908 cri.go:89] found id: "72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4"
	I0812 11:57:09.124750   59908 cri.go:89] found id: ""
	I0812 11:57:09.124761   59908 logs.go:276] 1 containers: [72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4]
	I0812 11:57:09.124828   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.128921   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:57:09.128997   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:57:09.164960   59908 cri.go:89] found id: "3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804"
	I0812 11:57:09.164982   59908 cri.go:89] found id: ""
	I0812 11:57:09.164995   59908 logs.go:276] 1 containers: [3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804]
	I0812 11:57:09.165046   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.169043   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:57:09.169116   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:57:09.211298   59908 cri.go:89] found id: "b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26"
	I0812 11:57:09.211322   59908 cri.go:89] found id: ""
	I0812 11:57:09.211329   59908 logs.go:276] 1 containers: [b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26]
	I0812 11:57:09.211377   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.215348   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:57:09.215440   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:57:09.269500   59908 cri.go:89] found id: "b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f"
	I0812 11:57:09.269519   59908 cri.go:89] found id: "f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f"
	I0812 11:57:09.269523   59908 cri.go:89] found id: ""
	I0812 11:57:09.269530   59908 logs.go:276] 2 containers: [b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f]
	I0812 11:57:09.269575   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.273724   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.277660   59908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:57:09.277732   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:57:09.327668   59908 cri.go:89] found id: ""
	I0812 11:57:09.327691   59908 logs.go:276] 0 containers: []
	W0812 11:57:09.327698   59908 logs.go:278] No container was found matching "kindnet"
	I0812 11:57:09.327703   59908 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0812 11:57:09.327765   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0812 11:57:09.363936   59908 cri.go:89] found id: "3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c"
	I0812 11:57:09.363957   59908 cri.go:89] found id: ""
	I0812 11:57:09.363964   59908 logs.go:276] 1 containers: [3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c]
	I0812 11:57:09.364010   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.368123   59908 logs.go:123] Gathering logs for kubelet ...
	I0812 11:57:09.368151   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:57:09.441676   59908 logs.go:123] Gathering logs for kube-apiserver [399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1] ...
	I0812 11:57:09.441725   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1"
	I0812 11:57:09.483275   59908 logs.go:123] Gathering logs for kube-controller-manager [b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f] ...
	I0812 11:57:09.483317   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f"
	I0812 11:57:09.544504   59908 logs.go:123] Gathering logs for kube-apiserver [87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98] ...
	I0812 11:57:09.544539   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98"
	I0812 11:57:09.594808   59908 logs.go:123] Gathering logs for kube-scheduler [3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804] ...
	I0812 11:57:09.594839   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804"
	I0812 11:57:09.636141   59908 logs.go:123] Gathering logs for kube-proxy [b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26] ...
	I0812 11:57:09.636178   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26"
	I0812 11:57:09.673996   59908 logs.go:123] Gathering logs for kube-controller-manager [f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f] ...
	I0812 11:57:09.674023   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f"
	I0812 11:57:09.711480   59908 logs.go:123] Gathering logs for storage-provisioner [3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c] ...
	I0812 11:57:09.711504   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c"
	I0812 11:57:09.747830   59908 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:57:09.747861   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:57:10.268559   59908 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:57:10.268607   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 11:57:10.394461   59908 logs.go:123] Gathering logs for etcd [a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126] ...
	I0812 11:57:10.394495   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126"
	I0812 11:57:10.439760   59908 logs.go:123] Gathering logs for coredns [72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4] ...
	I0812 11:57:10.439796   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4"
	I0812 11:57:10.474457   59908 logs.go:123] Gathering logs for container status ...
	I0812 11:57:10.474496   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:57:10.515430   59908 logs.go:123] Gathering logs for dmesg ...
	I0812 11:57:10.515464   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
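	The log-gathering commands above (journalctl, crictl logs, kubectl describe nodes, dmesg) are ordinary shell invocations and can be repeated by hand inside the guest for closer inspection. A sketch, assuming the standard minikube ssh entry point, with <container-id> standing in for one of the container IDs listed above:
	  minikube -p default-k8s-diff-port-581883 ssh
	  sudo crictl ps -a
	  sudo /usr/bin/crictl logs --tail 400 <container-id>
	  sudo journalctl -u kubelet -n 400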
	I0812 11:57:13.029229   59908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:57:13.045764   59908 api_server.go:72] duration metric: took 4m15.707395821s to wait for apiserver process to appear ...
	I0812 11:57:13.045795   59908 api_server.go:88] waiting for apiserver healthz status ...
	I0812 11:57:13.045832   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:57:13.045878   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:57:13.082792   59908 cri.go:89] found id: "87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98"
	I0812 11:57:13.082818   59908 cri.go:89] found id: "399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1"
	I0812 11:57:13.082824   59908 cri.go:89] found id: ""
	I0812 11:57:13.082833   59908 logs.go:276] 2 containers: [87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98 399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1]
	I0812 11:57:13.082893   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.087987   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.092188   59908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:57:13.092251   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:57:13.135193   59908 cri.go:89] found id: "a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126"
	I0812 11:57:13.135226   59908 cri.go:89] found id: ""
	I0812 11:57:13.135237   59908 logs.go:276] 1 containers: [a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126]
	I0812 11:57:13.135293   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.140269   59908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:57:13.140344   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:57:13.193436   59908 cri.go:89] found id: "72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4"
	I0812 11:57:13.193458   59908 cri.go:89] found id: ""
	I0812 11:57:13.193465   59908 logs.go:276] 1 containers: [72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4]
	I0812 11:57:13.193539   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.198507   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:57:13.198589   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:57:13.241696   59908 cri.go:89] found id: "3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804"
	I0812 11:57:13.241718   59908 cri.go:89] found id: ""
	I0812 11:57:13.241725   59908 logs.go:276] 1 containers: [3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804]
	I0812 11:57:13.241773   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.246865   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:57:13.246937   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:57:13.293284   59908 cri.go:89] found id: "b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26"
	I0812 11:57:13.293308   59908 cri.go:89] found id: ""
	I0812 11:57:13.293315   59908 logs.go:276] 1 containers: [b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26]
	I0812 11:57:13.293380   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.297698   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:57:13.297772   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:57:13.342737   59908 cri.go:89] found id: "b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f"
	I0812 11:57:13.342757   59908 cri.go:89] found id: "f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f"
	I0812 11:57:13.342760   59908 cri.go:89] found id: ""
	I0812 11:57:13.342767   59908 logs.go:276] 2 containers: [b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f]
	I0812 11:57:13.342809   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.347634   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.351733   59908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:57:13.351794   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:57:13.394540   59908 cri.go:89] found id: ""
	I0812 11:57:13.394570   59908 logs.go:276] 0 containers: []
	W0812 11:57:13.394580   59908 logs.go:278] No container was found matching "kindnet"
	I0812 11:57:13.394594   59908 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0812 11:57:13.394647   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0812 11:57:13.433910   59908 cri.go:89] found id: "3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c"
	I0812 11:57:13.433934   59908 cri.go:89] found id: ""
	I0812 11:57:13.433944   59908 logs.go:276] 1 containers: [3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c]
	I0812 11:57:13.434001   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.437999   59908 logs.go:123] Gathering logs for dmesg ...
	I0812 11:57:13.438024   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:57:13.451945   59908 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:57:13.451973   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 11:57:13.561957   59908 logs.go:123] Gathering logs for coredns [72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4] ...
	I0812 11:57:13.561990   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4"
	I0812 11:57:13.602729   59908 logs.go:123] Gathering logs for kubelet ...
	I0812 11:57:13.602754   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:57:13.673729   59908 logs.go:123] Gathering logs for kube-apiserver [399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1] ...
	I0812 11:57:13.673766   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1"
	I0812 11:57:13.714814   59908 logs.go:123] Gathering logs for kube-proxy [b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26] ...
	I0812 11:57:13.714843   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26"
	I0812 11:57:13.755876   59908 logs.go:123] Gathering logs for kube-controller-manager [b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f] ...
	I0812 11:57:13.755902   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f"
	I0812 11:57:13.814263   59908 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:57:13.814301   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:57:14.305206   59908 logs.go:123] Gathering logs for container status ...
	I0812 11:57:14.305243   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:57:14.349455   59908 logs.go:123] Gathering logs for kube-apiserver [87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98] ...
	I0812 11:57:14.349486   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98"
	I0812 11:57:14.399731   59908 logs.go:123] Gathering logs for etcd [a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126] ...
	I0812 11:57:14.399765   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126"
	I0812 11:57:14.443494   59908 logs.go:123] Gathering logs for kube-scheduler [3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804] ...
	I0812 11:57:14.443524   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804"
	I0812 11:57:14.486034   59908 logs.go:123] Gathering logs for kube-controller-manager [f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f] ...
	I0812 11:57:14.486070   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f"
	I0812 11:57:14.524991   59908 logs.go:123] Gathering logs for storage-provisioner [3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c] ...
	I0812 11:57:14.525018   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c"
	I0812 11:57:17.062314   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:57:17.068363   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 200:
	ok
	I0812 11:57:17.069818   59908 api_server.go:141] control plane version: v1.30.3
	I0812 11:57:17.069845   59908 api_server.go:131] duration metric: took 4.024042567s to wait for apiserver health ...
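	The healthz probe above hits the apiserver directly on the profile's non-default port 8444. A sketch of equivalent manual checks (the -k flag is assumed to be needed because the endpoint serves minikube's self-signed certificate):
	  curl -k https://192.168.50.114:8444/healthz
	  kubectl --context default-k8s-diff-port-581883 get --raw /healthz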
	I0812 11:57:17.069856   59908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 11:57:17.069882   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:57:17.069937   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:57:17.107213   59908 cri.go:89] found id: "87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98"
	I0812 11:57:17.107233   59908 cri.go:89] found id: "399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1"
	I0812 11:57:17.107237   59908 cri.go:89] found id: ""
	I0812 11:57:17.107244   59908 logs.go:276] 2 containers: [87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98 399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1]
	I0812 11:57:17.107297   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.117678   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.121897   59908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:57:17.121962   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:57:17.159450   59908 cri.go:89] found id: "a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126"
	I0812 11:57:17.159480   59908 cri.go:89] found id: ""
	I0812 11:57:17.159489   59908 logs.go:276] 1 containers: [a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126]
	I0812 11:57:17.159548   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.164078   59908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:57:17.164156   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:57:17.207977   59908 cri.go:89] found id: "72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4"
	I0812 11:57:17.208002   59908 cri.go:89] found id: ""
	I0812 11:57:17.208010   59908 logs.go:276] 1 containers: [72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4]
	I0812 11:57:17.208063   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.212055   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:57:17.212136   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:57:17.259289   59908 cri.go:89] found id: "3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804"
	I0812 11:57:17.259316   59908 cri.go:89] found id: ""
	I0812 11:57:17.259327   59908 logs.go:276] 1 containers: [3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804]
	I0812 11:57:17.259393   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.263818   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:57:17.263896   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:57:17.301371   59908 cri.go:89] found id: "b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26"
	I0812 11:57:17.301404   59908 cri.go:89] found id: ""
	I0812 11:57:17.301413   59908 logs.go:276] 1 containers: [b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26]
	I0812 11:57:17.301473   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.306038   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:57:17.306100   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:57:17.343982   59908 cri.go:89] found id: "b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f"
	I0812 11:57:17.344006   59908 cri.go:89] found id: "f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f"
	I0812 11:57:17.344017   59908 cri.go:89] found id: ""
	I0812 11:57:17.344027   59908 logs.go:276] 2 containers: [b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f]
	I0812 11:57:17.344086   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.348135   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.352720   59908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:57:17.352790   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:57:17.392647   59908 cri.go:89] found id: ""
	I0812 11:57:17.392673   59908 logs.go:276] 0 containers: []
	W0812 11:57:17.392682   59908 logs.go:278] No container was found matching "kindnet"
	I0812 11:57:17.392687   59908 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0812 11:57:17.392740   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0812 11:57:17.429067   59908 cri.go:89] found id: "3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c"
	I0812 11:57:17.429088   59908 cri.go:89] found id: ""
	I0812 11:57:17.429095   59908 logs.go:276] 1 containers: [3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c]
	I0812 11:57:17.429140   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.433406   59908 logs.go:123] Gathering logs for etcd [a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126] ...
	I0812 11:57:17.433433   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126"
	I0812 11:57:17.479091   59908 logs.go:123] Gathering logs for container status ...
	I0812 11:57:17.479123   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:57:17.519579   59908 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:57:17.519614   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 11:57:17.620109   59908 logs.go:123] Gathering logs for kube-apiserver [399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1] ...
	I0812 11:57:17.620143   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1"
	I0812 11:57:17.659604   59908 logs.go:123] Gathering logs for kube-controller-manager [b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f] ...
	I0812 11:57:17.659639   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f"
	I0812 11:57:17.712850   59908 logs.go:123] Gathering logs for kube-controller-manager [f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f] ...
	I0812 11:57:17.712901   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f"
	I0812 11:57:17.750567   59908 logs.go:123] Gathering logs for kubelet ...
	I0812 11:57:17.750595   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:57:17.822429   59908 logs.go:123] Gathering logs for coredns [72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4] ...
	I0812 11:57:17.822459   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4"
	I0812 11:57:17.864303   59908 logs.go:123] Gathering logs for kube-scheduler [3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804] ...
	I0812 11:57:17.864338   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804"
	I0812 11:57:17.904307   59908 logs.go:123] Gathering logs for kube-proxy [b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26] ...
	I0812 11:57:17.904340   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26"
	I0812 11:57:17.939073   59908 logs.go:123] Gathering logs for storage-provisioner [3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c] ...
	I0812 11:57:17.939103   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c"
	I0812 11:57:17.982222   59908 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:57:17.982253   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:57:18.369007   59908 logs.go:123] Gathering logs for dmesg ...
	I0812 11:57:18.369053   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:57:18.385187   59908 logs.go:123] Gathering logs for kube-apiserver [87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98] ...
	I0812 11:57:18.385219   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98"
	I0812 11:57:20.949075   59908 system_pods.go:59] 8 kube-system pods found
	I0812 11:57:20.949110   59908 system_pods.go:61] "coredns-7db6d8ff4d-86flr" [703201f6-ba92-45f7-b273-ee508cf51e2b] Running
	I0812 11:57:20.949115   59908 system_pods.go:61] "etcd-default-k8s-diff-port-581883" [98074b68-6274-4496-8fd3-7bad8b59b063] Running
	I0812 11:57:20.949119   59908 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-581883" [3f9d02cd-8b6f-4640-98e2-ebc5145444ea] Running
	I0812 11:57:20.949122   59908 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-581883" [b6c17f8f-18eb-41e6-9ef6-bab882066d51] Running
	I0812 11:57:20.949125   59908 system_pods.go:61] "kube-proxy-h6fzz" [b0f6bcc8-263a-4b23-a60b-c67475a868bf] Running
	I0812 11:57:20.949128   59908 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-581883" [3b8e21a4-9578-40fc-be22-8a469b5e9ff2] Running
	I0812 11:57:20.949133   59908 system_pods.go:61] "metrics-server-569cc877fc-wcpgl" [11f6c813-ebc1-4712-b758-cb08ff921d77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 11:57:20.949139   59908 system_pods.go:61] "storage-provisioner" [93affc3b-a4e7-4c19-824c-3eec33616acc] Running
	I0812 11:57:20.949146   59908 system_pods.go:74] duration metric: took 3.879283024s to wait for pod list to return data ...
	I0812 11:57:20.949153   59908 default_sa.go:34] waiting for default service account to be created ...
	I0812 11:57:20.951355   59908 default_sa.go:45] found service account: "default"
	I0812 11:57:20.951376   59908 default_sa.go:55] duration metric: took 2.217928ms for default service account to be created ...
	I0812 11:57:20.951383   59908 system_pods.go:116] waiting for k8s-apps to be running ...
	I0812 11:57:20.956479   59908 system_pods.go:86] 8 kube-system pods found
	I0812 11:57:20.956505   59908 system_pods.go:89] "coredns-7db6d8ff4d-86flr" [703201f6-ba92-45f7-b273-ee508cf51e2b] Running
	I0812 11:57:20.956513   59908 system_pods.go:89] "etcd-default-k8s-diff-port-581883" [98074b68-6274-4496-8fd3-7bad8b59b063] Running
	I0812 11:57:20.956519   59908 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-581883" [3f9d02cd-8b6f-4640-98e2-ebc5145444ea] Running
	I0812 11:57:20.956527   59908 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-581883" [b6c17f8f-18eb-41e6-9ef6-bab882066d51] Running
	I0812 11:57:20.956532   59908 system_pods.go:89] "kube-proxy-h6fzz" [b0f6bcc8-263a-4b23-a60b-c67475a868bf] Running
	I0812 11:57:20.956537   59908 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-581883" [3b8e21a4-9578-40fc-be22-8a469b5e9ff2] Running
	I0812 11:57:20.956546   59908 system_pods.go:89] "metrics-server-569cc877fc-wcpgl" [11f6c813-ebc1-4712-b758-cb08ff921d77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 11:57:20.956553   59908 system_pods.go:89] "storage-provisioner" [93affc3b-a4e7-4c19-824c-3eec33616acc] Running
	I0812 11:57:20.956564   59908 system_pods.go:126] duration metric: took 5.175002ms to wait for k8s-apps to be running ...
	I0812 11:57:20.956572   59908 system_svc.go:44] waiting for kubelet service to be running ....
	I0812 11:57:20.956624   59908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:57:20.971826   59908 system_svc.go:56] duration metric: took 15.246626ms WaitForService to wait for kubelet
	I0812 11:57:20.971856   59908 kubeadm.go:582] duration metric: took 4m23.633490244s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 11:57:20.971881   59908 node_conditions.go:102] verifying NodePressure condition ...
	I0812 11:57:20.974643   59908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 11:57:20.974661   59908 node_conditions.go:123] node cpu capacity is 2
	I0812 11:57:20.974671   59908 node_conditions.go:105] duration metric: took 2.785ms to run NodePressure ...
	I0812 11:57:20.974681   59908 start.go:241] waiting for startup goroutines ...
	I0812 11:57:20.974688   59908 start.go:246] waiting for cluster config update ...
	I0812 11:57:20.974700   59908 start.go:255] writing updated cluster config ...
	I0812 11:57:20.975043   59908 ssh_runner.go:195] Run: rm -f paused
	I0812 11:57:21.025000   59908 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0812 11:57:21.028153   59908 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-581883" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 12 11:58:30 no-preload-993542 crio[730]: time="2024-08-12 11:58:30.784292403Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b9736a6c-0262-4cc7-8a17-03d1c39741cc name=/runtime.v1.RuntimeService/Version
	Aug 12 11:58:30 no-preload-993542 crio[730]: time="2024-08-12 11:58:30.785625042Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5b33d4fc-12a3-4810-a3cf-8fc3d378c70b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:58:30 no-preload-993542 crio[730]: time="2024-08-12 11:58:30.786121344Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723463910786095036,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b33d4fc-12a3-4810-a3cf-8fc3d378c70b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:58:30 no-preload-993542 crio[730]: time="2024-08-12 11:58:30.786712964Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=614a964b-76e2-4e54-89b4-cf0de8092ad0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:58:30 no-preload-993542 crio[730]: time="2024-08-12 11:58:30.786826143Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=614a964b-76e2-4e54-89b4-cf0de8092ad0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:58:30 no-preload-993542 crio[730]: time="2024-08-12 11:58:30.787047080Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a97afb2dea0bb3b76e3e58d3af919d0326f43abd6b38fabd7927df99b4259f71,PodSandboxId:7616ad30a9581357a458cd1a11073d22bb8c424223ee22c932932a5ade973735,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463360222889831,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-2gc2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d5375c0-6f19-40b7-98bc-50d4ef45fd93,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e819e17e634c7a96ea18fe4ede7e232c6917308ca752baa1e22fe9b81b01b964,PodSandboxId:340c257cd6ea81f5938bdb10bb192ee6c683de496ae7fedbadff86fb7eaae1f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463360198153065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-shfmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6fd90de8-af9e-4b43-9fa7-b503a00e9845,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22d4668d24d8d2bf08e095f533872a62efa233a54566b9fb66b48c9199254746,PodSandboxId:31ebd1fd6c11c232d784db4e2a05c0c8e85ab46b2b3e5089ea051766dadec8d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1723463359714566124,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb7a321-e575-44e5-8d10-3749d1285806,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dec1828ea63c924f28aa60f3ed0f89bf784169b81864bc9db09734b2920ab69,PodSandboxId:737754dadaa6878c0a4d4718b28b52429bf3bd5b317ee7a8abb32b9858e080c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:
1723463358295919562,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8jwkz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43501e17-fde3-4468-a170-e64a58088ec2,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb29c2ee9470bab3324cbbc3c28453c35bc2a0ad8c0aca5a1a8119576954c94,PodSandboxId:ac89ea47168b24be940abc529730ba644e1b3be10336ccd3698ee9764a4b58a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1723463347422596926,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26ec55711ba1c1052321c141944ffc1d,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd414c1501d0a2ea9268272b9ed45aef07b0890a067119f2b5339db90f92d1ef,PodSandboxId:a74ad84115895d85d62eff6d860093c52405e94eba7044122d62924b7ee16db4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723463347389167808,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d7d5f7c83169a839579d85d6294d868,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b651d9db6daec2de76f9987eb56b439dd17620b7f9be407fd18ccea662ce8d19,PodSandboxId:a0b2f0f3531d801cf6c85ce7271abf80e884319cf79076a0c8ea694bedf102ba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1723463347373362261,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c900825ef33ee78a93cbc9d9fb3045,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31ef15b39cb8c8554c6cb881fe9a5f2563cdf48e576e32c65819689c66a68f1e,PodSandboxId:bc784b19036a82b2f3db06d45942d70d6fe8c56bede3fc6de7b632f04057c85c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1723463347353317220,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3371f860ec69a456c5c6ca316a385978,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33cf9f4f3371c7fe0d9d3b2280aaa1489b3560e829814774f4fd82b42fbdde9e,PodSandboxId:2d2ddb348e06719e7175687fe30bc4c0d5ce580cb3e45981dcb4adf468271142,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1723463060119516283,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3371f860ec69a456c5c6ca316a385978,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=614a964b-76e2-4e54-89b4-cf0de8092ad0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:58:30 no-preload-993542 crio[730]: time="2024-08-12 11:58:30.816995160Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9c83156a-35e2-4878-9b9e-89cdd29e9fdc name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 12 11:58:30 no-preload-993542 crio[730]: time="2024-08-12 11:58:30.817243182Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:3b055f30f04343bec17f4368ad44004d5057dd41b9df819c4532ad6362478766,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-25zg8,Uid:70d17780-d4bc-4df4-93ac-bb74c1fa50f3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723463359786801832,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-25zg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70d17780-d4bc-4df4-93ac-bb74c1fa50f3,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-12T11:49:19.479527302Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:340c257cd6ea81f5938bdb10bb192ee6c683de496ae7fedbadff86fb7eaae1f2,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-shfmr,Uid:6fd90de8-af9e-4b43-9fa7-b503a00e98
45,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723463359750491101,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-shfmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fd90de8-af9e-4b43-9fa7-b503a00e9845,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-12T11:49:17.935680469Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7616ad30a9581357a458cd1a11073d22bb8c424223ee22c932932a5ade973735,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-2gc2z,Uid:4d5375c0-6f19-40b7-98bc-50d4ef45fd93,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723463359726603462,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-2gc2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d5375c0-6f19-40b7-98bc-50d4ef45fd93,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:ma
p[string]string{kubernetes.io/config.seen: 2024-08-12T11:49:17.917546943Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:31ebd1fd6c11c232d784db4e2a05c0c8e85ab46b2b3e5089ea051766dadec8d6,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:beb7a321-e575-44e5-8d10-3749d1285806,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723463359615885782,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb7a321-e575-44e5-8d10-3749d1285806,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[
{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-12T11:49:19.310261106Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:737754dadaa6878c0a4d4718b28b52429bf3bd5b317ee7a8abb32b9858e080c2,Metadata:&PodSandboxMetadata{Name:kube-proxy-8jwkz,Uid:43501e17-fde3-4468-a170-e64a58088ec2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723463358185436030,Labels:map[string]string{controller-revision-hash: 677fdd8cbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-8jwkz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43501e17-fde3-4468-a170-e64a58088ec2,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-12T11:49:17.873836705Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bc784b19036a82b2f3db06d45942d70d6fe8c56bede3fc6de7b632f04057c85c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-993542,Uid:3371f860ec69a456c5c6ca316a385978,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723463347198811849,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3371f860ec69a456c5c6ca316a385978,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.148:8443,kubernetes.io/config.hash: 3371f860ec69a456c5c6ca316a385978,kubernetes.io/config.seen: 2024-08-12T11:49:06.749283049Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ac89ea47168b24be940abc529730ba6
44e1b3be10336ccd3698ee9764a4b58a5,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-993542,Uid:26ec55711ba1c1052321c141944ffc1d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723463347195223173,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26ec55711ba1c1052321c141944ffc1d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 26ec55711ba1c1052321c141944ffc1d,kubernetes.io/config.seen: 2024-08-12T11:49:06.749284972Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a74ad84115895d85d62eff6d860093c52405e94eba7044122d62924b7ee16db4,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-993542,Uid:9d7d5f7c83169a839579d85d6294d868,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723463347193263325,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kub
ernetes.pod.name: etcd-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d7d5f7c83169a839579d85d6294d868,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.148:2379,kubernetes.io/config.hash: 9d7d5f7c83169a839579d85d6294d868,kubernetes.io/config.seen: 2024-08-12T11:49:06.749278496Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a0b2f0f3531d801cf6c85ce7271abf80e884319cf79076a0c8ea694bedf102ba,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-993542,Uid:12c900825ef33ee78a93cbc9d9fb3045,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723463347192162644,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c900825ef33ee78a93cbc9d9fb3045,tier: control-plane,},Annotations:map[string]strin
g{kubernetes.io/config.hash: 12c900825ef33ee78a93cbc9d9fb3045,kubernetes.io/config.seen: 2024-08-12T11:49:06.749284012Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=9c83156a-35e2-4878-9b9e-89cdd29e9fdc name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 12 11:58:30 no-preload-993542 crio[730]: time="2024-08-12 11:58:30.817871347Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6dace8ba-005a-48ab-93de-6692a334f86c name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:58:30 no-preload-993542 crio[730]: time="2024-08-12 11:58:30.817931300Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6dace8ba-005a-48ab-93de-6692a334f86c name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:58:30 no-preload-993542 crio[730]: time="2024-08-12 11:58:30.818116112Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a97afb2dea0bb3b76e3e58d3af919d0326f43abd6b38fabd7927df99b4259f71,PodSandboxId:7616ad30a9581357a458cd1a11073d22bb8c424223ee22c932932a5ade973735,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463360222889831,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-2gc2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d5375c0-6f19-40b7-98bc-50d4ef45fd93,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e819e17e634c7a96ea18fe4ede7e232c6917308ca752baa1e22fe9b81b01b964,PodSandboxId:340c257cd6ea81f5938bdb10bb192ee6c683de496ae7fedbadff86fb7eaae1f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463360198153065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-shfmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6fd90de8-af9e-4b43-9fa7-b503a00e9845,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22d4668d24d8d2bf08e095f533872a62efa233a54566b9fb66b48c9199254746,PodSandboxId:31ebd1fd6c11c232d784db4e2a05c0c8e85ab46b2b3e5089ea051766dadec8d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1723463359714566124,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb7a321-e575-44e5-8d10-3749d1285806,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dec1828ea63c924f28aa60f3ed0f89bf784169b81864bc9db09734b2920ab69,PodSandboxId:737754dadaa6878c0a4d4718b28b52429bf3bd5b317ee7a8abb32b9858e080c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:
1723463358295919562,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8jwkz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43501e17-fde3-4468-a170-e64a58088ec2,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb29c2ee9470bab3324cbbc3c28453c35bc2a0ad8c0aca5a1a8119576954c94,PodSandboxId:ac89ea47168b24be940abc529730ba644e1b3be10336ccd3698ee9764a4b58a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1723463347422596926,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26ec55711ba1c1052321c141944ffc1d,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd414c1501d0a2ea9268272b9ed45aef07b0890a067119f2b5339db90f92d1ef,PodSandboxId:a74ad84115895d85d62eff6d860093c52405e94eba7044122d62924b7ee16db4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723463347389167808,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d7d5f7c83169a839579d85d6294d868,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b651d9db6daec2de76f9987eb56b439dd17620b7f9be407fd18ccea662ce8d19,PodSandboxId:a0b2f0f3531d801cf6c85ce7271abf80e884319cf79076a0c8ea694bedf102ba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1723463347373362261,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c900825ef33ee78a93cbc9d9fb3045,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31ef15b39cb8c8554c6cb881fe9a5f2563cdf48e576e32c65819689c66a68f1e,PodSandboxId:bc784b19036a82b2f3db06d45942d70d6fe8c56bede3fc6de7b632f04057c85c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1723463347353317220,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3371f860ec69a456c5c6ca316a385978,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6dace8ba-005a-48ab-93de-6692a334f86c name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:58:30 no-preload-993542 crio[730]: time="2024-08-12 11:58:30.824699822Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3b27cecc-8e6e-43d1-aa52-35e21bd53647 name=/runtime.v1.RuntimeService/Version
	Aug 12 11:58:30 no-preload-993542 crio[730]: time="2024-08-12 11:58:30.824832240Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3b27cecc-8e6e-43d1-aa52-35e21bd53647 name=/runtime.v1.RuntimeService/Version
	Aug 12 11:58:30 no-preload-993542 crio[730]: time="2024-08-12 11:58:30.825686280Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1e7b4c31-8d5f-4555-bf43-98b99c04d937 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:58:30 no-preload-993542 crio[730]: time="2024-08-12 11:58:30.826212394Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723463910826189436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1e7b4c31-8d5f-4555-bf43-98b99c04d937 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:58:30 no-preload-993542 crio[730]: time="2024-08-12 11:58:30.827249529Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=958c8fc7-684c-4054-b0ab-42dc11d628a5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:58:30 no-preload-993542 crio[730]: time="2024-08-12 11:58:30.827304072Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=958c8fc7-684c-4054-b0ab-42dc11d628a5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:58:30 no-preload-993542 crio[730]: time="2024-08-12 11:58:30.827498435Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a97afb2dea0bb3b76e3e58d3af919d0326f43abd6b38fabd7927df99b4259f71,PodSandboxId:7616ad30a9581357a458cd1a11073d22bb8c424223ee22c932932a5ade973735,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463360222889831,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-2gc2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d5375c0-6f19-40b7-98bc-50d4ef45fd93,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e819e17e634c7a96ea18fe4ede7e232c6917308ca752baa1e22fe9b81b01b964,PodSandboxId:340c257cd6ea81f5938bdb10bb192ee6c683de496ae7fedbadff86fb7eaae1f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463360198153065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-shfmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6fd90de8-af9e-4b43-9fa7-b503a00e9845,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22d4668d24d8d2bf08e095f533872a62efa233a54566b9fb66b48c9199254746,PodSandboxId:31ebd1fd6c11c232d784db4e2a05c0c8e85ab46b2b3e5089ea051766dadec8d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1723463359714566124,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb7a321-e575-44e5-8d10-3749d1285806,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dec1828ea63c924f28aa60f3ed0f89bf784169b81864bc9db09734b2920ab69,PodSandboxId:737754dadaa6878c0a4d4718b28b52429bf3bd5b317ee7a8abb32b9858e080c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:
1723463358295919562,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8jwkz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43501e17-fde3-4468-a170-e64a58088ec2,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb29c2ee9470bab3324cbbc3c28453c35bc2a0ad8c0aca5a1a8119576954c94,PodSandboxId:ac89ea47168b24be940abc529730ba644e1b3be10336ccd3698ee9764a4b58a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1723463347422596926,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26ec55711ba1c1052321c141944ffc1d,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd414c1501d0a2ea9268272b9ed45aef07b0890a067119f2b5339db90f92d1ef,PodSandboxId:a74ad84115895d85d62eff6d860093c52405e94eba7044122d62924b7ee16db4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723463347389167808,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d7d5f7c83169a839579d85d6294d868,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b651d9db6daec2de76f9987eb56b439dd17620b7f9be407fd18ccea662ce8d19,PodSandboxId:a0b2f0f3531d801cf6c85ce7271abf80e884319cf79076a0c8ea694bedf102ba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1723463347373362261,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c900825ef33ee78a93cbc9d9fb3045,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31ef15b39cb8c8554c6cb881fe9a5f2563cdf48e576e32c65819689c66a68f1e,PodSandboxId:bc784b19036a82b2f3db06d45942d70d6fe8c56bede3fc6de7b632f04057c85c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1723463347353317220,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3371f860ec69a456c5c6ca316a385978,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33cf9f4f3371c7fe0d9d3b2280aaa1489b3560e829814774f4fd82b42fbdde9e,PodSandboxId:2d2ddb348e06719e7175687fe30bc4c0d5ce580cb3e45981dcb4adf468271142,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1723463060119516283,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3371f860ec69a456c5c6ca316a385978,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=958c8fc7-684c-4054-b0ab-42dc11d628a5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:58:30 no-preload-993542 crio[730]: time="2024-08-12 11:58:30.867118420Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8a0c9888-9515-4e82-8611-f0c6d19bfd5d name=/runtime.v1.RuntimeService/Version
	Aug 12 11:58:30 no-preload-993542 crio[730]: time="2024-08-12 11:58:30.867200176Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8a0c9888-9515-4e82-8611-f0c6d19bfd5d name=/runtime.v1.RuntimeService/Version
	Aug 12 11:58:30 no-preload-993542 crio[730]: time="2024-08-12 11:58:30.868152773Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eabbd539-1ff6-4d23-9ed3-b9b7802ae36d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:58:30 no-preload-993542 crio[730]: time="2024-08-12 11:58:30.868505351Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723463910868479456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eabbd539-1ff6-4d23-9ed3-b9b7802ae36d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:58:30 no-preload-993542 crio[730]: time="2024-08-12 11:58:30.869040617Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a9552ccb-fcab-47df-8fd1-ac53e924ed11 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:58:30 no-preload-993542 crio[730]: time="2024-08-12 11:58:30.869091497Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a9552ccb-fcab-47df-8fd1-ac53e924ed11 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:58:30 no-preload-993542 crio[730]: time="2024-08-12 11:58:30.869278357Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a97afb2dea0bb3b76e3e58d3af919d0326f43abd6b38fabd7927df99b4259f71,PodSandboxId:7616ad30a9581357a458cd1a11073d22bb8c424223ee22c932932a5ade973735,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463360222889831,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-2gc2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d5375c0-6f19-40b7-98bc-50d4ef45fd93,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e819e17e634c7a96ea18fe4ede7e232c6917308ca752baa1e22fe9b81b01b964,PodSandboxId:340c257cd6ea81f5938bdb10bb192ee6c683de496ae7fedbadff86fb7eaae1f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463360198153065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-shfmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6fd90de8-af9e-4b43-9fa7-b503a00e9845,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22d4668d24d8d2bf08e095f533872a62efa233a54566b9fb66b48c9199254746,PodSandboxId:31ebd1fd6c11c232d784db4e2a05c0c8e85ab46b2b3e5089ea051766dadec8d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1723463359714566124,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb7a321-e575-44e5-8d10-3749d1285806,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dec1828ea63c924f28aa60f3ed0f89bf784169b81864bc9db09734b2920ab69,PodSandboxId:737754dadaa6878c0a4d4718b28b52429bf3bd5b317ee7a8abb32b9858e080c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:
1723463358295919562,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8jwkz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43501e17-fde3-4468-a170-e64a58088ec2,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb29c2ee9470bab3324cbbc3c28453c35bc2a0ad8c0aca5a1a8119576954c94,PodSandboxId:ac89ea47168b24be940abc529730ba644e1b3be10336ccd3698ee9764a4b58a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1723463347422596926,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26ec55711ba1c1052321c141944ffc1d,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd414c1501d0a2ea9268272b9ed45aef07b0890a067119f2b5339db90f92d1ef,PodSandboxId:a74ad84115895d85d62eff6d860093c52405e94eba7044122d62924b7ee16db4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723463347389167808,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d7d5f7c83169a839579d85d6294d868,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b651d9db6daec2de76f9987eb56b439dd17620b7f9be407fd18ccea662ce8d19,PodSandboxId:a0b2f0f3531d801cf6c85ce7271abf80e884319cf79076a0c8ea694bedf102ba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1723463347373362261,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c900825ef33ee78a93cbc9d9fb3045,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31ef15b39cb8c8554c6cb881fe9a5f2563cdf48e576e32c65819689c66a68f1e,PodSandboxId:bc784b19036a82b2f3db06d45942d70d6fe8c56bede3fc6de7b632f04057c85c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1723463347353317220,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3371f860ec69a456c5c6ca316a385978,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33cf9f4f3371c7fe0d9d3b2280aaa1489b3560e829814774f4fd82b42fbdde9e,PodSandboxId:2d2ddb348e06719e7175687fe30bc4c0d5ce580cb3e45981dcb4adf468271142,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1723463060119516283,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3371f860ec69a456c5c6ca316a385978,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a9552ccb-fcab-47df-8fd1-ac53e924ed11 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a97afb2dea0bb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   7616ad30a9581       coredns-6f6b679f8f-2gc2z
	e819e17e634c7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   340c257cd6ea8       coredns-6f6b679f8f-shfmr
	22d4668d24d8d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   31ebd1fd6c11c       storage-provisioner
	2dec1828ea63c       41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318   9 minutes ago       Running             kube-proxy                0                   737754dadaa68       kube-proxy-8jwkz
	2cb29c2ee9470       0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c   9 minutes ago       Running             kube-scheduler            2                   ac89ea47168b2       kube-scheduler-no-preload-993542
	cd414c1501d0a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   a74ad84115895       etcd-no-preload-993542
	b651d9db6daec       fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c   9 minutes ago       Running             kube-controller-manager   2                   a0b2f0f3531d8       kube-controller-manager-no-preload-993542
	31ef15b39cb8c       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   9 minutes ago       Running             kube-apiserver            2                   bc784b19036a8       kube-apiserver-no-preload-993542
	33cf9f4f3371c       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   14 minutes ago      Exited              kube-apiserver            1                   2d2ddb348e067       kube-apiserver-no-preload-993542
	
	
	==> coredns [a97afb2dea0bb3b76e3e58d3af919d0326f43abd6b38fabd7927df99b4259f71] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [e819e17e634c7a96ea18fe4ede7e232c6917308ca752baa1e22fe9b81b01b964] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-993542
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-993542
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
	                    minikube.k8s.io/name=no-preload-993542
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_12T11_49_13_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 11:49:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-993542
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 11:58:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 11:54:27 +0000   Mon, 12 Aug 2024 11:49:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 11:54:27 +0000   Mon, 12 Aug 2024 11:49:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 11:54:27 +0000   Mon, 12 Aug 2024 11:49:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 11:54:27 +0000   Mon, 12 Aug 2024 11:49:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.148
	  Hostname:    no-preload-993542
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 384be5da85b84567aeaffb21db9a0f6d
	  System UUID:                384be5da-85b8-4567-aeaf-fb21db9a0f6d
	  Boot ID:                    eee01779-c9d5-4d04-b9ff-057155f1346b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-rc.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-2gc2z                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m14s
	  kube-system                 coredns-6f6b679f8f-shfmr                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m14s
	  kube-system                 etcd-no-preload-993542                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-no-preload-993542             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-no-preload-993542    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-8jwkz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-scheduler-no-preload-993542             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-6867b74b74-25zg8              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m12s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m12s  kube-proxy       
	  Normal  Starting                 9m19s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m19s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m19s  kubelet          Node no-preload-993542 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s  kubelet          Node no-preload-993542 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s  kubelet          Node no-preload-993542 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m15s  node-controller  Node no-preload-993542 event: Registered Node no-preload-993542 in Controller
	  Normal  CIDRAssignmentFailed     9m15s  cidrAllocator    Node no-preload-993542 status is now: CIDRAssignmentFailed
	
	
	==> dmesg <==
	[  +0.045444] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.942233] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.935580] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.447595] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.463981] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.057589] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053561] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.170335] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.149306] systemd-fstab-generator[685]: Ignoring "noauto" option for root device
	[  +0.283613] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[Aug12 11:44] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.066910] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.144836] systemd-fstab-generator[1431]: Ignoring "noauto" option for root device
	[  +2.962560] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.162305] kauditd_printk_skb: 53 callbacks suppressed
	[ +27.438139] kauditd_printk_skb: 30 callbacks suppressed
	[Aug12 11:49] systemd-fstab-generator[3092]: Ignoring "noauto" option for root device
	[  +0.063307] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.481610] systemd-fstab-generator[3416]: Ignoring "noauto" option for root device
	[  +0.080980] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.602216] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.230335] systemd-fstab-generator[3626]: Ignoring "noauto" option for root device
	[  +6.968861] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [cd414c1501d0a2ea9268272b9ed45aef07b0890a067119f2b5339db90f92d1ef] <==
	{"level":"info","ts":"2024-08-12T11:49:07.784211Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8cf942be0a1301ad","local-member-id":"d94a8047b7882d6e","added-peer-id":"d94a8047b7882d6e","added-peer-peer-urls":["https://192.168.61.148:2380"]}
	{"level":"info","ts":"2024-08-12T11:49:07.784259Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.148:2380"}
	{"level":"info","ts":"2024-08-12T11:49:08.127893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94a8047b7882d6e is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-12T11:49:08.128068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94a8047b7882d6e became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-12T11:49:08.128187Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94a8047b7882d6e received MsgPreVoteResp from d94a8047b7882d6e at term 1"}
	{"level":"info","ts":"2024-08-12T11:49:08.128290Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94a8047b7882d6e became candidate at term 2"}
	{"level":"info","ts":"2024-08-12T11:49:08.128324Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94a8047b7882d6e received MsgVoteResp from d94a8047b7882d6e at term 2"}
	{"level":"info","ts":"2024-08-12T11:49:08.128424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94a8047b7882d6e became leader at term 2"}
	{"level":"info","ts":"2024-08-12T11:49:08.128521Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d94a8047b7882d6e elected leader d94a8047b7882d6e at term 2"}
	{"level":"info","ts":"2024-08-12T11:49:08.133199Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T11:49:08.134035Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"d94a8047b7882d6e","local-member-attributes":"{Name:no-preload-993542 ClientURLs:[https://192.168.61.148:2379]}","request-path":"/0/members/d94a8047b7882d6e/attributes","cluster-id":"8cf942be0a1301ad","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-12T11:49:08.134528Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-12T11:49:08.135662Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-12T11:49:08.137826Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-12T11:49:08.144791Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-12T11:49:08.138078Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8cf942be0a1301ad","local-member-id":"d94a8047b7882d6e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T11:49:08.145254Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T11:49:08.142076Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-12T11:49:08.149441Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-12T11:49:08.149580Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-12T11:49:08.149690Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T11:49:08.151563Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.148:2379"}
	{"level":"info","ts":"2024-08-12T11:52:08.925719Z","caller":"traceutil/trace.go:171","msg":"trace[739248365] transaction","detail":"{read_only:false; response_revision:589; number_of_response:1; }","duration":"100.614189ms","start":"2024-08-12T11:52:08.825068Z","end":"2024-08-12T11:52:08.925682Z","steps":["trace[739248365] 'process raft request'  (duration: 100.467509ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T11:52:09.182351Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.132026ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-12T11:52:09.182800Z","caller":"traceutil/trace.go:171","msg":"trace[1543864765] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:589; }","duration":"144.646059ms","start":"2024-08-12T11:52:09.038138Z","end":"2024-08-12T11:52:09.182784Z","steps":["trace[1543864765] 'range keys from in-memory index tree'  (duration: 144.108501ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:58:31 up 14 min,  0 users,  load average: 0.23, 0.26, 0.24
	Linux no-preload-993542 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [31ef15b39cb8c8554c6cb881fe9a5f2563cdf48e576e32c65819689c66a68f1e] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0812 11:54:11.138142       1 handler_proxy.go:99] no RequestInfo found in the context
	E0812 11:54:11.138183       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0812 11:54:11.139221       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0812 11:54:11.139404       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0812 11:55:11.140473       1 handler_proxy.go:99] no RequestInfo found in the context
	E0812 11:55:11.140688       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0812 11:55:11.140509       1 handler_proxy.go:99] no RequestInfo found in the context
	E0812 11:55:11.140822       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0812 11:55:11.141920       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0812 11:55:11.141953       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0812 11:57:11.142464       1 handler_proxy.go:99] no RequestInfo found in the context
	E0812 11:57:11.142832       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0812 11:57:11.142463       1 handler_proxy.go:99] no RequestInfo found in the context
	E0812 11:57:11.143002       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0812 11:57:11.144091       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0812 11:57:11.144133       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [33cf9f4f3371c7fe0d9d3b2280aaa1489b3560e829814774f4fd82b42fbdde9e] <==
	W0812 11:49:00.062662       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.066192       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.137063       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.140615       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.155410       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.203070       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.219463       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.227262       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.238919       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.278983       1 logging.go:55] [core] [Channel #43 SubChannel #44]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.325014       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.344054       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.350662       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.414531       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.545172       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.552806       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.554141       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.610713       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.729365       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.776447       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.799810       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.902940       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:04.410968       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:04.521050       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:04.716304       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [b651d9db6daec2de76f9987eb56b439dd17620b7f9be407fd18ccea662ce8d19] <==
	E0812 11:53:17.072002       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0812 11:53:17.621104       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 11:53:47.079508       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0812 11:53:47.629113       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 11:54:17.086944       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0812 11:54:17.637574       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0812 11:54:28.011286       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-993542"
	E0812 11:54:47.093388       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0812 11:54:47.645910       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0812 11:55:13.836086       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="387.719µs"
	E0812 11:55:17.099608       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0812 11:55:17.653792       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0812 11:55:26.835966       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="74.507µs"
	E0812 11:55:47.106003       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0812 11:55:47.663683       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 11:56:17.113269       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0812 11:56:17.671492       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 11:56:47.119263       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0812 11:56:47.680268       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 11:57:17.127004       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0812 11:57:17.689242       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 11:57:47.134148       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0812 11:57:47.697759       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 11:58:17.141541       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0812 11:58:17.706340       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [2dec1828ea63c924f28aa60f3ed0f89bf784169b81864bc9db09734b2920ab69] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0812 11:49:18.562600       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0812 11:49:18.576053       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.148"]
	E0812 11:49:18.576237       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0812 11:49:18.644020       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0812 11:49:18.644059       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0812 11:49:18.644092       1 server_linux.go:169] "Using iptables Proxier"
	I0812 11:49:18.649304       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0812 11:49:18.649663       1 server.go:483] "Version info" version="v1.31.0-rc.0"
	I0812 11:49:18.649696       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 11:49:18.652665       1 config.go:197] "Starting service config controller"
	I0812 11:49:18.652714       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0812 11:49:18.652994       1 config.go:104] "Starting endpoint slice config controller"
	I0812 11:49:18.653050       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0812 11:49:18.653793       1 config.go:326] "Starting node config controller"
	I0812 11:49:18.653819       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0812 11:49:18.754855       1 shared_informer.go:320] Caches are synced for node config
	I0812 11:49:18.754864       1 shared_informer.go:320] Caches are synced for service config
	I0812 11:49:18.754879       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2cb29c2ee9470bab3324cbbc3c28453c35bc2a0ad8c0aca5a1a8119576954c94] <==
	W0812 11:49:11.076585       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0812 11:49:11.076679       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0812 11:49:11.200879       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0812 11:49:11.201097       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0812 11:49:11.252278       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0812 11:49:11.252874       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0812 11:49:11.264232       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0812 11:49:11.264302       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0812 11:49:11.286784       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0812 11:49:11.286969       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0812 11:49:11.395233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0812 11:49:11.395340       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0812 11:49:11.415170       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0812 11:49:11.415409       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0812 11:49:11.424998       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0812 11:49:11.425320       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0812 11:49:11.434155       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0812 11:49:11.434367       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0812 11:49:11.480989       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0812 11:49:11.481119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0812 11:49:11.520811       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0812 11:49:11.520955       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0812 11:49:11.734874       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0812 11:49:11.735009       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0812 11:49:14.850284       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 12 11:57:12 no-preload-993542 kubelet[3424]: E0812 11:57:12.971774    3424 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723463832971305729,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 12 11:57:22 no-preload-993542 kubelet[3424]: E0812 11:57:22.973178    3424 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723463842972835380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 12 11:57:22 no-preload-993542 kubelet[3424]: E0812 11:57:22.973659    3424 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723463842972835380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 12 11:57:25 no-preload-993542 kubelet[3424]: E0812 11:57:25.817512    3424 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-25zg8" podUID="70d17780-d4bc-4df4-93ac-bb74c1fa50f3"
	Aug 12 11:57:32 no-preload-993542 kubelet[3424]: E0812 11:57:32.976035    3424 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723463852975579577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 12 11:57:32 no-preload-993542 kubelet[3424]: E0812 11:57:32.976108    3424 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723463852975579577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 12 11:57:37 no-preload-993542 kubelet[3424]: E0812 11:57:37.818276    3424 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-25zg8" podUID="70d17780-d4bc-4df4-93ac-bb74c1fa50f3"
	Aug 12 11:57:42 no-preload-993542 kubelet[3424]: E0812 11:57:42.978187    3424 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723463862977606197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 12 11:57:42 no-preload-993542 kubelet[3424]: E0812 11:57:42.978564    3424 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723463862977606197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 12 11:57:51 no-preload-993542 kubelet[3424]: E0812 11:57:51.819453    3424 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-25zg8" podUID="70d17780-d4bc-4df4-93ac-bb74c1fa50f3"
	Aug 12 11:57:52 no-preload-993542 kubelet[3424]: E0812 11:57:52.980765    3424 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723463872980088603,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 12 11:57:52 no-preload-993542 kubelet[3424]: E0812 11:57:52.981102    3424 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723463872980088603,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 12 11:58:02 no-preload-993542 kubelet[3424]: E0812 11:58:02.983685    3424 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723463882982937147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 12 11:58:02 no-preload-993542 kubelet[3424]: E0812 11:58:02.984094    3424 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723463882982937147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 12 11:58:06 no-preload-993542 kubelet[3424]: E0812 11:58:06.817554    3424 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-25zg8" podUID="70d17780-d4bc-4df4-93ac-bb74c1fa50f3"
	Aug 12 11:58:12 no-preload-993542 kubelet[3424]: E0812 11:58:12.834472    3424 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 11:58:12 no-preload-993542 kubelet[3424]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 11:58:12 no-preload-993542 kubelet[3424]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 11:58:12 no-preload-993542 kubelet[3424]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 11:58:12 no-preload-993542 kubelet[3424]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 11:58:12 no-preload-993542 kubelet[3424]: E0812 11:58:12.987039    3424 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723463892986337199,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 12 11:58:12 no-preload-993542 kubelet[3424]: E0812 11:58:12.987211    3424 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723463892986337199,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 12 11:58:21 no-preload-993542 kubelet[3424]: E0812 11:58:21.817896    3424 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-25zg8" podUID="70d17780-d4bc-4df4-93ac-bb74c1fa50f3"
	Aug 12 11:58:22 no-preload-993542 kubelet[3424]: E0812 11:58:22.990035    3424 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723463902989420596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 12 11:58:22 no-preload-993542 kubelet[3424]: E0812 11:58:22.990077    3424 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723463902989420596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [22d4668d24d8d2bf08e095f533872a62efa233a54566b9fb66b48c9199254746] <==
	I0812 11:49:19.861795       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0812 11:49:19.887511       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0812 11:49:19.887584       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0812 11:49:19.905072       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0812 11:49:19.905873       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4a7fdbe9-19d1-4799-88b3-8c3f9b85e5b5", APIVersion:"v1", ResourceVersion:"388", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-993542_9e51be82-b188-4f69-8b4b-7025f601611d became leader
	I0812 11:49:19.905931       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-993542_9e51be82-b188-4f69-8b4b-7025f601611d!
	I0812 11:49:20.006073       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-993542_9e51be82-b188-4f69-8b4b-7025f601611d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-993542 -n no-preload-993542
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-993542 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-25zg8
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-993542 describe pod metrics-server-6867b74b74-25zg8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-993542 describe pod metrics-server-6867b74b74-25zg8: exit status 1 (62.591493ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-25zg8" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-993542 describe pod metrics-server-6867b74b74-25zg8: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.55s)
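Note: the condition this test polls for can be approximated by hand with kubectl. A minimal sketch, assuming the no-preload-993542 context is still reachable and that this run waits on the same k8s-app=kubernetes-dashboard label in the kubernetes-dashboard namespace as the embed-certs run below (the 540s timeout here is illustrative, roughly matching the test's 9m0s budget):

	kubectl --context no-preload-993542 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-993542 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=540s

If the first command returns no pods (as the failure above indicates), the wait exits non-zero once the timeout expires, matching the context-deadline error reported by the test.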

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.56s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0812 11:50:45.935443   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-093615 -n embed-certs-093615
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-12 11:58:42.974302916 +0000 UTC m=+5912.131225497
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-093615 -n embed-certs-093615
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-093615 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-093615 logs -n 25: (1.379246105s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p pause-693259                                        | pause-693259                 | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-693259                                        | pause-693259                 | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-693259                                        | pause-693259                 | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	| start   | -p embed-certs-093615                                  | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:35 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-002803                              | cert-expiration-002803       | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	| delete  | -p                                                     | disable-driver-mounts-101845 | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	|         | disable-driver-mounts-101845                           |                              |         |         |                     |                     |
	| start   | -p no-preload-993542                                   | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:36 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-093615            | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 11:35 UTC | 12 Aug 24 11:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-093615                                  | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 11:35 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-993542             | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:36 UTC | 12 Aug 24 11:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-993542                                   | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-835962        | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 11:37 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-093615                 | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 11:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-093615                                  | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 11:38 UTC | 12 Aug 24 11:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-835962                              | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 11:38 UTC | 12 Aug 24 11:39 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-835962             | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC | 12 Aug 24 11:39 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-835962                              | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-535697                           | kubernetes-upgrade-535697    | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC | 12 Aug 24 11:39 UTC |
	| start   | -p                                                     | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC | 12 Aug 24 11:44 UTC |
	|         | default-k8s-diff-port-581883                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-993542                  | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-993542                                   | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC | 12 Aug 24 11:49 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-581883  | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:44 UTC | 12 Aug 24 11:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:44 UTC |                     |
	|         | default-k8s-diff-port-581883                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-581883       | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:46 UTC | 12 Aug 24 11:57 UTC |
	|         | default-k8s-diff-port-581883                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
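Because the Audit table wraps arguments across rows, the last entry reads more clearly as a single command line. Reconstructed from the flags in that row (a sketch only; binary path as used elsewhere in this log):

	out/minikube-linux-amd64 start -p default-k8s-diff-port-581883 --memory=2200 \
	  --alsologtostderr --wait=true --apiserver-port=8444 \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.30.3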
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 11:46:59
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 11:46:59.013199   59908 out.go:291] Setting OutFile to fd 1 ...
	I0812 11:46:59.013476   59908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:46:59.013486   59908 out.go:304] Setting ErrFile to fd 2...
	I0812 11:46:59.013490   59908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:46:59.013689   59908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 11:46:59.014204   59908 out.go:298] Setting JSON to false
	I0812 11:46:59.015302   59908 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5360,"bootTime":1723457859,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 11:46:59.015368   59908 start.go:139] virtualization: kvm guest
	I0812 11:46:59.017512   59908 out.go:177] * [default-k8s-diff-port-581883] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 11:46:59.018833   59908 notify.go:220] Checking for updates...
	I0812 11:46:59.018859   59908 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 11:46:59.020251   59908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 11:46:59.021646   59908 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 11:46:59.022806   59908 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 11:46:59.024110   59908 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 11:46:59.025476   59908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 11:46:59.027290   59908 config.go:182] Loaded profile config "default-k8s-diff-port-581883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 11:46:59.027911   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:46:59.028000   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:46:59.042960   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39549
	I0812 11:46:59.043506   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:46:59.044010   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:46:59.044038   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:46:59.044357   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:46:59.044528   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:46:59.044791   59908 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 11:46:59.045201   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:46:59.045244   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:46:59.060824   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35189
	I0812 11:46:59.061268   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:46:59.061747   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:46:59.061775   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:46:59.062156   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:46:59.062346   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:46:59.101403   59908 out.go:177] * Using the kvm2 driver based on existing profile
	I0812 11:46:59.102677   59908 start.go:297] selected driver: kvm2
	I0812 11:46:59.102698   59908 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-581883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-581883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:46:59.102863   59908 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 11:46:59.103621   59908 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 11:46:59.103690   59908 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19409-3774/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 11:46:59.119409   59908 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 11:46:59.119785   59908 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 11:46:59.119848   59908 cni.go:84] Creating CNI manager for ""
	I0812 11:46:59.119862   59908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:46:59.119900   59908 start.go:340] cluster config:
	{Name:default-k8s-diff-port-581883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-581883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:46:59.120006   59908 iso.go:125] acquiring lock: {Name:mk817f4bb6a5031e68978029203802957205757f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 11:46:59.121814   59908 out.go:177] * Starting "default-k8s-diff-port-581883" primary control-plane node in "default-k8s-diff-port-581883" cluster
	I0812 11:46:59.123067   59908 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 11:46:59.123111   59908 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0812 11:46:59.123124   59908 cache.go:56] Caching tarball of preloaded images
	I0812 11:46:59.123213   59908 preload.go:172] Found /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 11:46:59.123228   59908 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 11:46:59.123315   59908 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/config.json ...
	I0812 11:46:59.123508   59908 start.go:360] acquireMachinesLock for default-k8s-diff-port-581883: {Name:mkf1f3ff7da562a087a1e344d13e67c3c8140973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 11:46:59.123549   59908 start.go:364] duration metric: took 23.58µs to acquireMachinesLock for "default-k8s-diff-port-581883"
	I0812 11:46:59.123562   59908 start.go:96] Skipping create...Using existing machine configuration
	I0812 11:46:59.123569   59908 fix.go:54] fixHost starting: 
	I0812 11:46:59.123822   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:46:59.123852   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:46:59.138741   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44075
	I0812 11:46:59.139136   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:46:59.139611   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:46:59.139638   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:46:59.139938   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:46:59.140109   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:46:59.140220   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetState
	I0812 11:46:59.141738   59908 fix.go:112] recreateIfNeeded on default-k8s-diff-port-581883: state=Running err=<nil>
	W0812 11:46:59.141754   59908 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 11:46:59.143728   59908 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-581883" VM ...
	I0812 11:46:54.633587   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:46:54.653858   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:46:54.653945   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:46:54.693961   57198 cri.go:89] found id: ""
	I0812 11:46:54.693985   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.693992   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:46:54.693997   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:46:54.694045   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:46:54.728922   57198 cri.go:89] found id: ""
	I0812 11:46:54.728951   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.728963   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:46:54.728970   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:46:54.729034   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:46:54.764203   57198 cri.go:89] found id: ""
	I0812 11:46:54.764235   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.764246   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:46:54.764253   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:46:54.764316   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:46:54.805321   57198 cri.go:89] found id: ""
	I0812 11:46:54.805352   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.805363   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:46:54.805370   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:46:54.805437   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:46:54.844243   57198 cri.go:89] found id: ""
	I0812 11:46:54.844273   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.844281   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:46:54.844287   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:46:54.844345   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:46:54.883145   57198 cri.go:89] found id: ""
	I0812 11:46:54.883181   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.883192   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:46:54.883200   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:46:54.883263   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:46:54.921119   57198 cri.go:89] found id: ""
	I0812 11:46:54.921150   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.921160   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:46:54.921168   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:46:54.921230   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:46:54.955911   57198 cri.go:89] found id: ""
	I0812 11:46:54.955941   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.955949   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:46:54.955958   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:46:54.955969   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:46:55.006069   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:46:55.006108   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:46:55.020600   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:46:55.020637   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:46:55.094897   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:46:55.094917   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:46:55.094932   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:46:55.173601   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:46:55.173642   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
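This block, which repeats below roughly every three seconds, is minikube probing for control-plane containers and collecting logs; every probe comes back empty, and describe nodes fails because nothing answers on localhost:8443. A by-hand version of the same probe (a sketch; from the v1.20.0 binary path this appears to be the old-k8s-version-835962 profile):

	# Check whether any apiserver/etcd containers exist at all on the node
	out/minikube-linux-amd64 -p old-k8s-version-835962 ssh "sudo crictl ps -a --quiet --name=kube-apiserver"
	out/minikube-linux-amd64 -p old-k8s-version-835962 ssh "sudo crictl ps -a --quiet --name=etcd"
	# Pull the same journals the harness gathers
	out/minikube-linux-amd64 -p old-k8s-version-835962 ssh "sudo journalctl -u kubelet -n 400"
	out/minikube-linux-amd64 -p old-k8s-version-835962 ssh "sudo journalctl -u crio -n 400"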
	I0812 11:46:57.711917   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:46:57.726261   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:46:57.726340   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:46:57.762810   57198 cri.go:89] found id: ""
	I0812 11:46:57.762834   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.762845   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:46:57.762853   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:46:57.762919   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:46:57.796596   57198 cri.go:89] found id: ""
	I0812 11:46:57.796638   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.796649   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:46:57.796657   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:46:57.796719   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:46:57.829568   57198 cri.go:89] found id: ""
	I0812 11:46:57.829600   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.829607   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:46:57.829612   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:46:57.829659   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:46:57.861229   57198 cri.go:89] found id: ""
	I0812 11:46:57.861260   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.861271   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:46:57.861278   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:46:57.861339   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:46:57.892274   57198 cri.go:89] found id: ""
	I0812 11:46:57.892302   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.892312   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:46:57.892320   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:46:57.892384   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:46:57.924635   57198 cri.go:89] found id: ""
	I0812 11:46:57.924662   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.924670   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:46:57.924675   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:46:57.924723   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:46:57.961539   57198 cri.go:89] found id: ""
	I0812 11:46:57.961584   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.961592   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:46:57.961598   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:46:57.961656   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:46:57.994115   57198 cri.go:89] found id: ""
	I0812 11:46:57.994148   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.994160   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:46:57.994170   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:46:57.994182   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:46:58.067608   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:46:58.067648   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:46:58.105003   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:46:58.105036   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:46:58.156152   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:46:58.156186   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:46:58.169380   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:46:58.169409   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:46:58.236991   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:46:56.296673   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:46:58.297248   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:00.796584   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:00.182029   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:02.182240   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
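Interleaved with the restart are two other profiles polling their metrics-server pods, which stay not-Ready for the whole window. A sketch for inspecting one of them directly (the context name depends on which profile the goroutine belongs to, so <profile-context> is a placeholder):

	kubectl --context <profile-context> -n kube-system get pod metrics-server-6867b74b74-s52v2 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")]}'
	kubectl --context <profile-context> -n kube-system describe pod metrics-server-6867b74b74-s52v2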
	I0812 11:46:59.144895   59908 machine.go:94] provisionDockerMachine start ...
	I0812 11:46:59.144926   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:46:59.145161   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:46:59.147827   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:46:59.148278   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:43:32 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:46:59.148305   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:46:59.148451   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:46:59.148645   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:46:59.148820   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:46:59.148953   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:46:59.149111   59908 main.go:141] libmachine: Using SSH client type: native
	I0812 11:46:59.149331   59908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0812 11:46:59.149345   59908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0812 11:47:02.045251   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
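The restart then stalls on SSH: every dial to 192.168.50.114:22 returns "no route to host", and the same error recurs below. A quick sketch for checking the VM and the lease the log matched (assumes the libvirt client tools are available on the host; domain name, MAC, and IP come from the DBG lines above):

	virsh -c qemu:///system list --all
	virsh -c qemu:///system domifaddr default-k8s-diff-port-581883
	# Probe the SSH port the provisioner is dialing
	nc -vz -w 5 192.168.50.114 22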
	I0812 11:47:00.737522   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:00.750916   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:00.750991   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:00.782713   57198 cri.go:89] found id: ""
	I0812 11:47:00.782734   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.782742   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:00.782747   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:00.782793   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:00.816552   57198 cri.go:89] found id: ""
	I0812 11:47:00.816576   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.816584   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:00.816590   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:00.816639   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:00.850761   57198 cri.go:89] found id: ""
	I0812 11:47:00.850784   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.850794   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:00.850801   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:00.850864   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:00.888099   57198 cri.go:89] found id: ""
	I0812 11:47:00.888138   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.888146   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:00.888152   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:00.888210   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:00.926073   57198 cri.go:89] found id: ""
	I0812 11:47:00.926103   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.926113   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:00.926120   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:00.926187   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:00.963404   57198 cri.go:89] found id: ""
	I0812 11:47:00.963434   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.963442   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:00.963447   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:00.963508   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:00.998331   57198 cri.go:89] found id: ""
	I0812 11:47:00.998366   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.998376   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:00.998385   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:00.998448   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:01.042696   57198 cri.go:89] found id: ""
	I0812 11:47:01.042729   57198 logs.go:276] 0 containers: []
	W0812 11:47:01.042738   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:01.042750   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:01.042764   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:01.134880   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:01.134918   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:01.171185   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:01.171223   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:01.222565   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:01.222608   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:01.236042   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:01.236076   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:01.309342   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:03.810121   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:03.822945   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:03.823023   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:03.856316   57198 cri.go:89] found id: ""
	I0812 11:47:03.856342   57198 logs.go:276] 0 containers: []
	W0812 11:47:03.856353   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:03.856361   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:03.856428   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:03.894579   57198 cri.go:89] found id: ""
	I0812 11:47:03.894610   57198 logs.go:276] 0 containers: []
	W0812 11:47:03.894622   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:03.894630   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:03.894680   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:03.929306   57198 cri.go:89] found id: ""
	I0812 11:47:03.929334   57198 logs.go:276] 0 containers: []
	W0812 11:47:03.929352   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:03.929359   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:03.929419   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:03.970739   57198 cri.go:89] found id: ""
	I0812 11:47:03.970774   57198 logs.go:276] 0 containers: []
	W0812 11:47:03.970786   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:03.970794   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:03.970872   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:04.004583   57198 cri.go:89] found id: ""
	I0812 11:47:04.004610   57198 logs.go:276] 0 containers: []
	W0812 11:47:04.004619   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:04.004630   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:04.004681   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:04.039259   57198 cri.go:89] found id: ""
	I0812 11:47:04.039288   57198 logs.go:276] 0 containers: []
	W0812 11:47:04.039298   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:04.039304   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:04.039372   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:04.072490   57198 cri.go:89] found id: ""
	I0812 11:47:04.072522   57198 logs.go:276] 0 containers: []
	W0812 11:47:04.072532   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:04.072547   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:04.072602   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:04.105648   57198 cri.go:89] found id: ""
	I0812 11:47:04.105677   57198 logs.go:276] 0 containers: []
	W0812 11:47:04.105686   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:04.105694   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:04.105705   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:04.181854   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:04.181880   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:04.181894   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:04.258499   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:04.258541   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:03.294934   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:05.295154   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:04.183393   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:06.682752   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:05.121108   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:04.296893   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:04.296918   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:04.347475   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:04.347514   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:06.862382   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:06.876230   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:06.876314   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:06.919447   57198 cri.go:89] found id: ""
	I0812 11:47:06.919487   57198 logs.go:276] 0 containers: []
	W0812 11:47:06.919499   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:06.919508   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:06.919581   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:06.954000   57198 cri.go:89] found id: ""
	I0812 11:47:06.954035   57198 logs.go:276] 0 containers: []
	W0812 11:47:06.954046   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:06.954055   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:06.954124   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:06.988225   57198 cri.go:89] found id: ""
	I0812 11:47:06.988256   57198 logs.go:276] 0 containers: []
	W0812 11:47:06.988266   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:06.988274   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:06.988347   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:07.024425   57198 cri.go:89] found id: ""
	I0812 11:47:07.024452   57198 logs.go:276] 0 containers: []
	W0812 11:47:07.024464   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:07.024471   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:07.024536   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:07.059758   57198 cri.go:89] found id: ""
	I0812 11:47:07.059785   57198 logs.go:276] 0 containers: []
	W0812 11:47:07.059793   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:07.059800   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:07.059859   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:07.093540   57198 cri.go:89] found id: ""
	I0812 11:47:07.093570   57198 logs.go:276] 0 containers: []
	W0812 11:47:07.093580   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:07.093587   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:07.093649   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:07.126880   57198 cri.go:89] found id: ""
	I0812 11:47:07.126910   57198 logs.go:276] 0 containers: []
	W0812 11:47:07.126920   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:07.126929   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:07.126984   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:07.159930   57198 cri.go:89] found id: ""
	I0812 11:47:07.159959   57198 logs.go:276] 0 containers: []
	W0812 11:47:07.159970   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:07.159980   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:07.159995   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:07.214022   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:07.214063   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:07.227009   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:07.227037   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:07.297583   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:07.297609   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:07.297629   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:07.377229   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:07.377281   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:07.296302   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:09.296695   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:09.182760   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:11.682727   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:11.197110   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:09.914683   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:09.927943   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:09.928014   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:09.961729   57198 cri.go:89] found id: ""
	I0812 11:47:09.961757   57198 logs.go:276] 0 containers: []
	W0812 11:47:09.961768   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:09.961775   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:09.961835   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:09.998895   57198 cri.go:89] found id: ""
	I0812 11:47:09.998923   57198 logs.go:276] 0 containers: []
	W0812 11:47:09.998931   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:09.998936   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:09.998989   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:10.036414   57198 cri.go:89] found id: ""
	I0812 11:47:10.036447   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.036457   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:10.036465   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:10.036519   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:10.073783   57198 cri.go:89] found id: ""
	I0812 11:47:10.073811   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.073818   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:10.073824   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:10.073872   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:10.110532   57198 cri.go:89] found id: ""
	I0812 11:47:10.110566   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.110577   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:10.110584   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:10.110643   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:10.143728   57198 cri.go:89] found id: ""
	I0812 11:47:10.143768   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.143782   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:10.143791   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:10.143875   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:10.176706   57198 cri.go:89] found id: ""
	I0812 11:47:10.176740   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.176749   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:10.176754   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:10.176803   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:10.210409   57198 cri.go:89] found id: ""
	I0812 11:47:10.210439   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.210449   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:10.210460   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:10.210474   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:10.261338   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:10.261378   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:10.274313   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:10.274346   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:10.341830   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:10.341865   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:10.341881   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:10.417654   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:10.417699   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:12.954982   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:12.967755   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:12.967841   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:13.001425   57198 cri.go:89] found id: ""
	I0812 11:47:13.001452   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.001462   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:13.001470   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:13.001528   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:13.036527   57198 cri.go:89] found id: ""
	I0812 11:47:13.036561   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.036572   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:13.036579   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:13.036640   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:13.073271   57198 cri.go:89] found id: ""
	I0812 11:47:13.073301   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.073314   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:13.073323   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:13.073380   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:13.107512   57198 cri.go:89] found id: ""
	I0812 11:47:13.107543   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.107551   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:13.107557   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:13.107614   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:13.141938   57198 cri.go:89] found id: ""
	I0812 11:47:13.141972   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.141984   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:13.141991   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:13.142051   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:13.176628   57198 cri.go:89] found id: ""
	I0812 11:47:13.176660   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.176672   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:13.176679   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:13.176739   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:13.211620   57198 cri.go:89] found id: ""
	I0812 11:47:13.211649   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.211660   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:13.211667   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:13.211732   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:13.243877   57198 cri.go:89] found id: ""
	I0812 11:47:13.243902   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.243909   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:13.243917   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:13.243928   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:13.297684   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:13.297718   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:13.311287   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:13.311318   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:13.376488   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:13.376516   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:13.376531   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:13.457745   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:13.457786   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:11.795381   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:13.795932   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:14.183038   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:16.183071   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:14.273141   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:15.993556   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:16.006169   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:16.006249   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:16.040541   57198 cri.go:89] found id: ""
	I0812 11:47:16.040569   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.040578   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:16.040583   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:16.040633   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:16.073886   57198 cri.go:89] found id: ""
	I0812 11:47:16.073913   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.073924   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:16.073931   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:16.073993   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:16.107299   57198 cri.go:89] found id: ""
	I0812 11:47:16.107356   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.107369   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:16.107376   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:16.107431   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:16.142168   57198 cri.go:89] found id: ""
	I0812 11:47:16.142200   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.142209   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:16.142215   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:16.142262   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:16.175398   57198 cri.go:89] found id: ""
	I0812 11:47:16.175429   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.175440   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:16.175447   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:16.175509   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:16.210518   57198 cri.go:89] found id: ""
	I0812 11:47:16.210543   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.210551   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:16.210558   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:16.210614   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:16.244570   57198 cri.go:89] found id: ""
	I0812 11:47:16.244602   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.244611   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:16.244617   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:16.244683   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:16.278722   57198 cri.go:89] found id: ""
	I0812 11:47:16.278748   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.278756   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:16.278765   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:16.278777   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:16.322973   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:16.323010   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:16.374888   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:16.374936   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:16.388797   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:16.388827   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:16.462710   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:16.462731   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:16.462742   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:19.046529   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:19.061016   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:19.061083   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:19.098199   57198 cri.go:89] found id: ""
	I0812 11:47:19.098226   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.098238   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:19.098246   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:19.098307   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:19.131177   57198 cri.go:89] found id: ""
	I0812 11:47:19.131207   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.131215   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:19.131222   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:19.131281   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:19.164497   57198 cri.go:89] found id: ""
	I0812 11:47:19.164528   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.164539   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:19.164546   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:19.164619   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:19.200447   57198 cri.go:89] found id: ""
	I0812 11:47:19.200477   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.200485   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:19.200490   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:19.200553   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:19.235004   57198 cri.go:89] found id: ""
	I0812 11:47:19.235039   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.235051   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:19.235058   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:19.235114   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:16.297007   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:18.796402   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:18.186341   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:20.682850   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:22.683087   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:20.349117   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:23.421182   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:19.269669   57198 cri.go:89] found id: ""
	I0812 11:47:19.269700   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.269711   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:19.269719   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:19.269786   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:19.305486   57198 cri.go:89] found id: ""
	I0812 11:47:19.305515   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.305527   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:19.305533   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:19.305610   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:19.340701   57198 cri.go:89] found id: ""
	I0812 11:47:19.340730   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.340737   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:19.340745   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:19.340757   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:19.391595   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:19.391637   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:19.405702   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:19.405730   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:19.476972   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:19.477002   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:19.477017   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:19.560001   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:19.560037   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:22.100167   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:22.112650   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:22.112712   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:22.145625   57198 cri.go:89] found id: ""
	I0812 11:47:22.145651   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.145659   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:22.145665   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:22.145722   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:22.181353   57198 cri.go:89] found id: ""
	I0812 11:47:22.181388   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.181400   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:22.181407   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:22.181465   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:22.213563   57198 cri.go:89] found id: ""
	I0812 11:47:22.213592   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.213603   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:22.213610   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:22.213669   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:22.247589   57198 cri.go:89] found id: ""
	I0812 11:47:22.247614   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.247629   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:22.247635   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:22.247682   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:22.279102   57198 cri.go:89] found id: ""
	I0812 11:47:22.279126   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.279134   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:22.279139   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:22.279187   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:22.316174   57198 cri.go:89] found id: ""
	I0812 11:47:22.316204   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.316215   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:22.316222   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:22.316289   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:22.351875   57198 cri.go:89] found id: ""
	I0812 11:47:22.351904   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.351915   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:22.351920   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:22.351976   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:22.384224   57198 cri.go:89] found id: ""
	I0812 11:47:22.384260   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.384273   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:22.384283   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:22.384297   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:22.423032   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:22.423058   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:22.474127   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:22.474165   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:22.487638   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:22.487672   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:22.556554   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:22.556590   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:22.556607   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:21.295000   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:23.295712   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:25.296884   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:25.183687   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:27.683615   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:25.138357   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:25.152354   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:25.152438   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:25.187059   57198 cri.go:89] found id: ""
	I0812 11:47:25.187085   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.187095   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:25.187104   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:25.187164   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:25.220817   57198 cri.go:89] found id: ""
	I0812 11:47:25.220840   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.220848   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:25.220853   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:25.220911   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:25.256308   57198 cri.go:89] found id: ""
	I0812 11:47:25.256334   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.256342   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:25.256347   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:25.256394   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:25.290211   57198 cri.go:89] found id: ""
	I0812 11:47:25.290245   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.290254   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:25.290263   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:25.290334   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:25.324612   57198 cri.go:89] found id: ""
	I0812 11:47:25.324644   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.324651   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:25.324657   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:25.324708   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:25.362160   57198 cri.go:89] found id: ""
	I0812 11:47:25.362189   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.362200   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:25.362208   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:25.362271   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:25.396434   57198 cri.go:89] found id: ""
	I0812 11:47:25.396458   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.396466   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:25.396471   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:25.396531   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:25.429708   57198 cri.go:89] found id: ""
	I0812 11:47:25.429738   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.429750   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:25.429761   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:25.429775   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:25.443553   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:25.443588   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:25.515643   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:25.515684   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:25.515699   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:25.596323   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:25.596365   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:25.632444   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:25.632482   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:28.182092   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:28.195568   57198 kubeadm.go:597] duration metric: took 4m2.144668431s to restartPrimaryControlPlane
	W0812 11:47:28.195647   57198 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0812 11:47:28.195678   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0812 11:47:29.194896   57198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:47:29.210273   57198 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 11:47:29.220401   57198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 11:47:29.230765   57198 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 11:47:29.230783   57198 kubeadm.go:157] found existing configuration files:
	
	I0812 11:47:29.230825   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 11:47:29.240322   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 11:47:29.240392   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 11:47:29.251511   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 11:47:29.261616   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 11:47:29.261675   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 11:47:27.795828   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:29.796889   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:29.683959   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:32.183115   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:32.541112   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:29.273431   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 11:47:29.284262   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 11:47:29.284331   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 11:47:29.295811   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 11:47:29.306613   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 11:47:29.306685   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 11:47:29.317986   57198 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 11:47:29.566668   57198 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
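The reset sequence logged above (kubeadm reset, stale-config check, removal of the old kubeconfig files, then kubeadm init with preflight errors ignored) corresponds to the following shell steps; a minimal sketch assuming the same paths and the v1.20.0 binaries referenced in the log:

	# Illustrative reproduction of the control-plane reset seen in the log.
	# Paths and flags are copied from the log; run on the node, not the host.
	BIN=/var/lib/minikube/binaries/v1.20.0
	sudo env PATH="$BIN:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force
	sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	# Stale-config cleanup: any kubeconfig that does not reference the
	# control-plane endpoint is removed before re-initialising.
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
	    || sudo rm -f /etc/kubernetes/$f
	done
	sudo env PATH="$BIN:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem

In the log none of the four kubeconfig files exist, so the grep checks exit with status 2 and the files are simply removed before kubeadm init runs.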
	I0812 11:47:32.295992   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:34.795262   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:34.183370   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:36.682661   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:35.613159   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:36.796467   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:39.295851   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:39.182790   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:41.183829   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:41.693116   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:41.795257   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:43.795510   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:45.795595   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:43.681967   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:45.684043   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:44.765178   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:48.296050   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:50.796799   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:48.181748   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:50.182360   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:52.682975   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:50.845098   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:53.917138   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:53.299038   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:55.796462   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:55.183044   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:57.685262   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:58.295509   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:00.795668   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:00.182427   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:02.682842   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:59.997094   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:03.069083   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:03.296463   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:05.795306   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:05.182884   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:07.682408   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:07.796147   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:10.296184   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:10.182124   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:12.182757   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:09.149157   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:12.221135   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:12.296827   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:14.796551   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:14.682524   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:16.682657   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:18.301111   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:17.295545   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:19.295850   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:18.688121   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:21.182277   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:21.373181   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:21.297142   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:23.798497   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:23.182636   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:25.682702   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:27.682936   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:27.453111   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:26.295505   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:28.296105   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:30.796925   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:29.688759   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:32.182416   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:30.525184   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:33.295379   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:35.296605   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:34.183273   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:36.682829   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:36.605187   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:37.796023   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:38.789570   57616 pod_ready.go:81] duration metric: took 4m0.000355544s for pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace to be "Ready" ...
	E0812 11:48:38.789615   57616 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0812 11:48:38.789648   57616 pod_ready.go:38] duration metric: took 4m11.040926567s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:48:38.789687   57616 kubeadm.go:597] duration metric: took 4m21.131138259s to restartPrimaryControlPlane
	W0812 11:48:38.789757   57616 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0812 11:48:38.789794   57616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0812 11:48:38.683163   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:40.683334   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:39.677106   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:43.182845   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:44.677001   56845 pod_ready.go:81] duration metric: took 4m0.0007218s for pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace to be "Ready" ...
	E0812 11:48:44.677024   56845 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace to be "Ready" (will not retry!)
	I0812 11:48:44.677041   56845 pod_ready.go:38] duration metric: took 4m12.037310023s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:48:44.677065   56845 kubeadm.go:597] duration metric: took 4m19.591323336s to restartPrimaryControlPlane
	W0812 11:48:44.677114   56845 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0812 11:48:44.677137   56845 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0812 11:48:45.757157   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:48.829146   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:54.909142   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:57.981079   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:04.870417   57616 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.080589185s)
	I0812 11:49:04.870490   57616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:49:04.897963   57616 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 11:49:04.912211   57616 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 11:49:04.933833   57616 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 11:49:04.933861   57616 kubeadm.go:157] found existing configuration files:
	
	I0812 11:49:04.933915   57616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 11:49:04.946673   57616 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 11:49:04.946756   57616 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 11:49:04.960851   57616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 11:49:04.989181   57616 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 11:49:04.989259   57616 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 11:49:05.002989   57616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 11:49:05.012600   57616 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 11:49:05.012673   57616 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 11:49:05.022301   57616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 11:49:05.031680   57616 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 11:49:05.031761   57616 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 11:49:05.041453   57616 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 11:49:05.087039   57616 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-rc.0
	I0812 11:49:05.087106   57616 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 11:49:05.195646   57616 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 11:49:05.195788   57616 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 11:49:05.195909   57616 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0812 11:49:05.204565   57616 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 11:49:05.207373   57616 out.go:204]   - Generating certificates and keys ...
	I0812 11:49:05.207481   57616 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 11:49:05.207573   57616 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 11:49:05.207696   57616 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0812 11:49:05.207792   57616 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0812 11:49:05.207896   57616 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0812 11:49:05.207995   57616 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0812 11:49:05.208103   57616 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0812 11:49:05.208195   57616 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0812 11:49:05.208296   57616 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0812 11:49:05.208401   57616 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0812 11:49:05.208456   57616 kubeadm.go:310] [certs] Using the existing "sa" key
	I0812 11:49:05.208531   57616 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 11:49:05.368644   57616 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 11:49:05.523403   57616 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0812 11:49:05.656177   57616 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 11:49:05.786141   57616 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 11:49:05.945607   57616 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 11:49:05.946201   57616 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 11:49:05.948940   57616 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 11:49:05.950857   57616 out.go:204]   - Booting up control plane ...
	I0812 11:49:05.950970   57616 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 11:49:05.951060   57616 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 11:49:05.952093   57616 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 11:49:05.971023   57616 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 11:49:05.978207   57616 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 11:49:05.978421   57616 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 11:49:06.109216   57616 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0812 11:49:06.109362   57616 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0812 11:49:04.061117   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:07.133143   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:07.110595   57616 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001459707s
	I0812 11:49:07.110732   57616 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0812 11:49:12.112776   57616 kubeadm.go:310] [api-check] The API server is healthy after 5.002008667s
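Once kubeadm reports the API server healthy it is serving on port 8443 of the node. After the kubeconfig for this profile is written (see the "Updating kubeconfig" line later in this log), the same thing can be confirmed by hand through the readiness endpoint; this is an illustrative check, not something the test runs, and the context name is assumed to follow the profile name:

# Ask the API server to list its readiness checks.
kubectl --context no-preload-993542 get --raw='/readyz?verbose'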
	I0812 11:49:12.126637   57616 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0812 11:49:12.141115   57616 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0812 11:49:12.166337   57616 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0812 11:49:12.166727   57616 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-993542 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0812 11:49:12.180548   57616 kubeadm.go:310] [bootstrap-token] Using token: jiwh9x.y6rsv6xjvwdwkbct
	I0812 11:49:12.182174   57616 out.go:204]   - Configuring RBAC rules ...
	I0812 11:49:12.182276   57616 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0812 11:49:12.191053   57616 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0812 11:49:12.203294   57616 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0812 11:49:12.208858   57616 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0812 11:49:12.215501   57616 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0812 11:49:12.227747   57616 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0812 11:49:12.520136   57616 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0812 11:49:12.964503   57616 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0812 11:49:13.523969   57616 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0812 11:49:13.524831   57616 kubeadm.go:310] 
	I0812 11:49:13.524954   57616 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0812 11:49:13.524973   57616 kubeadm.go:310] 
	I0812 11:49:13.525098   57616 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0812 11:49:13.525113   57616 kubeadm.go:310] 
	I0812 11:49:13.525147   57616 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0812 11:49:13.525220   57616 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0812 11:49:13.525311   57616 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0812 11:49:13.525325   57616 kubeadm.go:310] 
	I0812 11:49:13.525411   57616 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0812 11:49:13.525420   57616 kubeadm.go:310] 
	I0812 11:49:13.525489   57616 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0812 11:49:13.525503   57616 kubeadm.go:310] 
	I0812 11:49:13.525572   57616 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0812 11:49:13.525690   57616 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0812 11:49:13.525780   57616 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0812 11:49:13.525790   57616 kubeadm.go:310] 
	I0812 11:49:13.525905   57616 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0812 11:49:13.526000   57616 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0812 11:49:13.526011   57616 kubeadm.go:310] 
	I0812 11:49:13.526119   57616 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jiwh9x.y6rsv6xjvwdwkbct \
	I0812 11:49:13.526271   57616 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc \
	I0812 11:49:13.526307   57616 kubeadm.go:310] 	--control-plane 
	I0812 11:49:13.526317   57616 kubeadm.go:310] 
	I0812 11:49:13.526420   57616 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0812 11:49:13.526429   57616 kubeadm.go:310] 
	I0812 11:49:13.526527   57616 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jiwh9x.y6rsv6xjvwdwkbct \
	I0812 11:49:13.526653   57616 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc 
	I0812 11:49:13.527630   57616 kubeadm.go:310] W0812 11:49:05.056260    3066 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0812 11:49:13.528000   57616 kubeadm.go:310] W0812 11:49:05.058135    3066 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0812 11:49:13.528149   57616 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
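The two W-lines are kubeadm's own deprecation warnings: the generated /var/tmp/minikube/kubeadm.yaml still uses the kubeadm.k8s.io/v1beta3 API, which v1.31 accepts but flags. The warning already names the remedy; run by hand it would look roughly like this (the output path is illustrative, and the kubeadm binary location is assumed to match the PATH used in the init command above):

# Rewrite the deprecated v1beta3 config using the current kubeadm API version.
sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubeadm config migrate \
  --old-config /var/tmp/minikube/kubeadm.yaml \
  --new-config /var/tmp/minikube/kubeadm-migrated.yaml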
	I0812 11:49:13.528175   57616 cni.go:84] Creating CNI manager for ""
	I0812 11:49:13.528189   57616 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:49:13.529938   57616 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0812 11:49:13.213137   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:13.531443   57616 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0812 11:49:13.542933   57616 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
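The 496 bytes copied to /etc/cni/net.d/1-k8s.conflist are minikube's bridge CNI configuration. The log does not show the payload, but a bridge conflist of this kind typically looks like the sketch below; the field values, including the pod subnet, are assumptions rather than the bytes actually written to the node:

# Illustrative bridge + portmap CNI config of the kind written above.
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF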
	I0812 11:49:13.562053   57616 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 11:49:13.562181   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:13.562196   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-993542 minikube.k8s.io/updated_at=2024_08_12T11_49_13_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7 minikube.k8s.io/name=no-preload-993542 minikube.k8s.io/primary=true
	I0812 11:49:13.764006   57616 ops.go:34] apiserver oom_adj: -16
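The three commands above are minikube's post-init wiring: it reads the API server's OOM adjustment (-16, i.e. strongly protected from the OOM killer), binds the kube-system default service account to cluster-admin, and labels the node as the profile's primary control-plane node. A manual spot-check of the same state could look like this, with the kubectl path and kubeconfig taken from the log:

# Verify the state the commands above establish.
cat /proc/$(pgrep kube-apiserver)/oom_adj
sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
  get clusterrolebinding minikube-rbac
sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
  get node no-preload-993542 --show-labels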
	I0812 11:49:13.764145   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:14.264728   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:14.764225   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:15.264599   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:15.764919   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:15.943701   56845 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.266539018s)
	I0812 11:49:15.943778   56845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:49:15.959746   56845 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 11:49:15.970630   56845 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 11:49:15.980712   56845 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 11:49:15.980729   56845 kubeadm.go:157] found existing configuration files:
	
	I0812 11:49:15.980775   56845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 11:49:15.990070   56845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 11:49:15.990133   56845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 11:49:15.999602   56845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 11:49:16.008767   56845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 11:49:16.008855   56845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 11:49:16.019564   56845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 11:49:16.028585   56845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 11:49:16.028660   56845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 11:49:16.037916   56845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 11:49:16.047028   56845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 11:49:16.047087   56845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 11:49:16.056780   56845 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 11:49:16.104764   56845 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0812 11:49:16.104848   56845 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 11:49:16.239085   56845 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 11:49:16.239218   56845 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 11:49:16.239309   56845 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'

	I0812 11:49:16.456581   56845 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 11:49:16.458619   56845 out.go:204]   - Generating certificates and keys ...
	I0812 11:49:16.458731   56845 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 11:49:16.458805   56845 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 11:49:16.458927   56845 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0812 11:49:16.459037   56845 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0812 11:49:16.459121   56845 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0812 11:49:16.459191   56845 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0812 11:49:16.459281   56845 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0812 11:49:16.459385   56845 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0812 11:49:16.459469   56845 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0812 11:49:16.459569   56845 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0812 11:49:16.459643   56845 kubeadm.go:310] [certs] Using the existing "sa" key
	I0812 11:49:16.459734   56845 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 11:49:16.579477   56845 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 11:49:16.765880   56845 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0812 11:49:16.885469   56845 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 11:49:16.955885   56845 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 11:49:17.091576   56845 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 11:49:17.092005   56845 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 11:49:17.094454   56845 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 11:49:17.096720   56845 out.go:204]   - Booting up control plane ...
	I0812 11:49:17.096850   56845 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 11:49:17.096976   56845 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 11:49:17.098357   56845 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 11:49:17.115656   56845 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 11:49:17.116069   56845 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 11:49:17.116128   56845 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 11:49:17.256475   56845 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0812 11:49:17.256550   56845 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0812 11:49:17.758741   56845 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.271569ms
	I0812 11:49:17.758818   56845 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0812 11:49:16.264606   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:16.764905   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:17.264989   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:17.765205   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:18.265008   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:18.380060   57616 kubeadm.go:1113] duration metric: took 4.817945872s to wait for elevateKubeSystemPrivileges
	I0812 11:49:18.380107   57616 kubeadm.go:394] duration metric: took 5m0.782175026s to StartCluster
	I0812 11:49:18.380131   57616 settings.go:142] acquiring lock: {Name:mk4060151bda1f131d6abfbbd19b6a8ab3b5e774 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:49:18.380237   57616 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 11:49:18.382942   57616 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/kubeconfig: {Name:mke68f1d372d8e5baa69199b426efaec54860499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:49:18.383329   57616 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.148 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 11:49:18.383406   57616 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0812 11:49:18.383564   57616 addons.go:69] Setting storage-provisioner=true in profile "no-preload-993542"
	I0812 11:49:18.383573   57616 addons.go:69] Setting default-storageclass=true in profile "no-preload-993542"
	I0812 11:49:18.383603   57616 addons.go:234] Setting addon storage-provisioner=true in "no-preload-993542"
	W0812 11:49:18.383618   57616 addons.go:243] addon storage-provisioner should already be in state true
	I0812 11:49:18.383620   57616 config.go:182] Loaded profile config "no-preload-993542": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0812 11:49:18.383634   57616 addons.go:69] Setting metrics-server=true in profile "no-preload-993542"
	I0812 11:49:18.383653   57616 host.go:66] Checking if "no-preload-993542" exists ...
	I0812 11:49:18.383621   57616 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-993542"
	I0812 11:49:18.383662   57616 addons.go:234] Setting addon metrics-server=true in "no-preload-993542"
	W0812 11:49:18.383674   57616 addons.go:243] addon metrics-server should already be in state true
	I0812 11:49:18.383708   57616 host.go:66] Checking if "no-preload-993542" exists ...
	I0812 11:49:18.384042   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.384072   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.384089   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.384117   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.384181   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.384211   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.386531   57616 out.go:177] * Verifying Kubernetes components...
	I0812 11:49:18.388412   57616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:49:18.404269   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44399
	I0812 11:49:18.404302   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36427
	I0812 11:49:18.404279   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43565
	I0812 11:49:18.405011   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.405062   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.405012   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.405601   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.405603   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.405621   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.405636   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.405743   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.405769   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.406150   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.406174   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.406184   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.406762   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.406786   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.407101   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetState
	I0812 11:49:18.407395   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.407420   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.411782   57616 addons.go:234] Setting addon default-storageclass=true in "no-preload-993542"
	W0812 11:49:18.411813   57616 addons.go:243] addon default-storageclass should already be in state true
	I0812 11:49:18.411843   57616 host.go:66] Checking if "no-preload-993542" exists ...
	I0812 11:49:18.412202   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.412241   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.428999   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41401
	I0812 11:49:18.429469   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.430064   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.430087   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.430147   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45407
	I0812 11:49:18.430442   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.430500   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.430762   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetState
	I0812 11:49:18.431525   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.431539   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.431950   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.432152   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetState
	I0812 11:49:18.432474   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46405
	I0812 11:49:18.432876   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.433599   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.433618   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.433872   57616 main.go:141] libmachine: (no-preload-993542) Calling .DriverName
	I0812 11:49:18.434119   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.434381   57616 main.go:141] libmachine: (no-preload-993542) Calling .DriverName
	I0812 11:49:18.434819   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.434875   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.436590   57616 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 11:49:18.436703   57616 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0812 11:49:16.285160   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:18.438442   57616 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 11:49:18.438466   57616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 11:49:18.438489   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHHostname
	I0812 11:49:18.438698   57616 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0812 11:49:18.438713   57616 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0812 11:49:18.438731   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHHostname
	I0812 11:49:18.443927   57616 main.go:141] libmachine: (no-preload-993542) DBG | domain no-preload-993542 has defined MAC address 52:54:00:bc:11:b5 in network mk-no-preload-993542
	I0812 11:49:18.443965   57616 main.go:141] libmachine: (no-preload-993542) DBG | domain no-preload-993542 has defined MAC address 52:54:00:bc:11:b5 in network mk-no-preload-993542
	I0812 11:49:18.444276   57616 main.go:141] libmachine: (no-preload-993542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:11:b5", ip: ""} in network mk-no-preload-993542: {Iface:virbr1 ExpiryTime:2024-08-12 12:43:50 +0000 UTC Type:0 Mac:52:54:00:bc:11:b5 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:no-preload-993542 Clientid:01:52:54:00:bc:11:b5}
	I0812 11:49:18.444315   57616 main.go:141] libmachine: (no-preload-993542) DBG | domain no-preload-993542 has defined IP address 192.168.61.148 and MAC address 52:54:00:bc:11:b5 in network mk-no-preload-993542
	I0812 11:49:18.444373   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHPort
	I0812 11:49:18.444614   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHKeyPath
	I0812 11:49:18.444790   57616 main.go:141] libmachine: (no-preload-993542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:11:b5", ip: ""} in network mk-no-preload-993542: {Iface:virbr1 ExpiryTime:2024-08-12 12:43:50 +0000 UTC Type:0 Mac:52:54:00:bc:11:b5 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:no-preload-993542 Clientid:01:52:54:00:bc:11:b5}
	I0812 11:49:18.444824   57616 main.go:141] libmachine: (no-preload-993542) DBG | domain no-preload-993542 has defined IP address 192.168.61.148 and MAC address 52:54:00:bc:11:b5 in network mk-no-preload-993542
	I0812 11:49:18.444851   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHUsername
	I0812 11:49:18.445055   57616 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/no-preload-993542/id_rsa Username:docker}
	I0812 11:49:18.445427   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHPort
	I0812 11:49:18.445624   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHKeyPath
	I0812 11:49:18.445776   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHUsername
	I0812 11:49:18.445938   57616 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/no-preload-993542/id_rsa Username:docker}
	I0812 11:49:18.457462   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I0812 11:49:18.457995   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.458573   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.458602   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.459048   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.459315   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetState
	I0812 11:49:18.461486   57616 main.go:141] libmachine: (no-preload-993542) Calling .DriverName
	I0812 11:49:18.461753   57616 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 11:49:18.461770   57616 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 11:49:18.461788   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHHostname
	I0812 11:49:18.465243   57616 main.go:141] libmachine: (no-preload-993542) DBG | domain no-preload-993542 has defined MAC address 52:54:00:bc:11:b5 in network mk-no-preload-993542
	I0812 11:49:18.465776   57616 main.go:141] libmachine: (no-preload-993542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:11:b5", ip: ""} in network mk-no-preload-993542: {Iface:virbr1 ExpiryTime:2024-08-12 12:43:50 +0000 UTC Type:0 Mac:52:54:00:bc:11:b5 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:no-preload-993542 Clientid:01:52:54:00:bc:11:b5}
	I0812 11:49:18.465803   57616 main.go:141] libmachine: (no-preload-993542) DBG | domain no-preload-993542 has defined IP address 192.168.61.148 and MAC address 52:54:00:bc:11:b5 in network mk-no-preload-993542
	I0812 11:49:18.465981   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHPort
	I0812 11:49:18.466172   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHKeyPath
	I0812 11:49:18.466325   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHUsername
	I0812 11:49:18.466478   57616 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/no-preload-993542/id_rsa Username:docker}
	I0812 11:49:18.649285   57616 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 11:49:18.666240   57616 node_ready.go:35] waiting up to 6m0s for node "no-preload-993542" to be "Ready" ...
	I0812 11:49:18.675741   57616 node_ready.go:49] node "no-preload-993542" has status "Ready":"True"
	I0812 11:49:18.675769   57616 node_ready.go:38] duration metric: took 9.489483ms for node "no-preload-993542" to be "Ready" ...
	I0812 11:49:18.675781   57616 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:49:18.687934   57616 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-2gc2z" in "kube-system" namespace to be "Ready" ...
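The pod_ready poll that starts here is minikube waiting for the system-critical pods to become Ready. The same condition for the CoreDNS pods can be expressed directly with kubectl; this is an equivalent manual check, not what minikube runs internally, and the context name is assumed to follow the profile name:

# Wait for the CoreDNS pods in kube-system to report Ready.
kubectl --context no-preload-993542 -n kube-system \
  wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m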
	I0812 11:49:18.762652   57616 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 11:49:18.769504   57616 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0812 11:49:18.769533   57616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0812 11:49:18.801182   57616 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 11:49:18.815215   57616 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0812 11:49:18.815249   57616 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0812 11:49:18.869830   57616 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 11:49:18.869856   57616 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0812 11:49:18.943609   57616 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
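The image logged for the metrics-server addon above is fake.domain/registry.k8s.io/echoserver:1.4, so the apply only creates the API objects; whether the aggregated metrics API ever serves data is a separate question. It can be checked with standard kubectl commands (illustrative; the context name is assumed to follow the profile name):

# Check whether the metrics-server aggregated API is registered and serving.
kubectl --context no-preload-993542 get apiservice v1beta1.metrics.k8s.io
kubectl --context no-preload-993542 top nodes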
	I0812 11:49:19.326108   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.326145   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.326183   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.326200   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.326517   57616 main.go:141] libmachine: (no-preload-993542) DBG | Closing plugin on server side
	I0812 11:49:19.326543   57616 main.go:141] libmachine: (no-preload-993542) DBG | Closing plugin on server side
	I0812 11:49:19.326558   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.326571   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.326577   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.326580   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.326586   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.326588   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.326597   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.326598   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.326969   57616 main.go:141] libmachine: (no-preload-993542) DBG | Closing plugin on server side
	I0812 11:49:19.326997   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.327005   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.327232   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.327247   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.349315   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.349341   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.349693   57616 main.go:141] libmachine: (no-preload-993542) DBG | Closing plugin on server side
	I0812 11:49:19.349737   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.349746   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.620732   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.620765   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.621097   57616 main.go:141] libmachine: (no-preload-993542) DBG | Closing plugin on server side
	I0812 11:49:19.621143   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.621160   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.621170   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.621182   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.621446   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.621469   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.621481   57616 addons.go:475] Verifying addon metrics-server=true in "no-preload-993542"
	I0812 11:49:19.624757   57616 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0812 11:49:19.626510   57616 addons.go:510] duration metric: took 1.243102289s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
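The same addon state can be inspected or re-applied later from the minikube CLI, using the profile name from this log:

# List and (re-)enable addons on the no-preload-993542 profile.
minikube -p no-preload-993542 addons list
minikube -p no-preload-993542 addons enable metrics-server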
	I0812 11:49:20.695552   57616 pod_ready.go:102] pod "coredns-6f6b679f8f-2gc2z" in "kube-system" namespace has status "Ready":"False"
	I0812 11:49:22.762626   56845 kubeadm.go:310] [api-check] The API server is healthy after 5.002108915s
	I0812 11:49:22.782365   56845 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0812 11:49:22.794869   56845 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0812 11:49:22.829058   56845 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0812 11:49:22.829314   56845 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-093615 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0812 11:49:22.842722   56845 kubeadm.go:310] [bootstrap-token] Using token: e42mo3.61s6ofjvy51u5vh7
	I0812 11:49:22.844590   56845 out.go:204]   - Configuring RBAC rules ...
	I0812 11:49:22.844745   56845 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0812 11:49:22.851804   56845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0812 11:49:22.861419   56845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0812 11:49:22.866597   56845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0812 11:49:22.870810   56845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0812 11:49:22.886117   56845 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0812 11:49:22.365060   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:23.168156   56845 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0812 11:49:23.612002   56845 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0812 11:49:24.170270   56845 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0812 11:49:24.171014   56845 kubeadm.go:310] 
	I0812 11:49:24.171076   56845 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0812 11:49:24.171084   56845 kubeadm.go:310] 
	I0812 11:49:24.171146   56845 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0812 11:49:24.171153   56845 kubeadm.go:310] 
	I0812 11:49:24.171204   56845 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0812 11:49:24.171801   56845 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0812 11:49:24.171846   56845 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0812 11:49:24.171853   56845 kubeadm.go:310] 
	I0812 11:49:24.171954   56845 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0812 11:49:24.171975   56845 kubeadm.go:310] 
	I0812 11:49:24.172039   56845 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0812 11:49:24.172051   56845 kubeadm.go:310] 
	I0812 11:49:24.172125   56845 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0812 11:49:24.172247   56845 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0812 11:49:24.172360   56845 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0812 11:49:24.172378   56845 kubeadm.go:310] 
	I0812 11:49:24.172498   56845 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0812 11:49:24.172601   56845 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0812 11:49:24.172611   56845 kubeadm.go:310] 
	I0812 11:49:24.172772   56845 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token e42mo3.61s6ofjvy51u5vh7 \
	I0812 11:49:24.172908   56845 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc \
	I0812 11:49:24.172944   56845 kubeadm.go:310] 	--control-plane 
	I0812 11:49:24.172953   56845 kubeadm.go:310] 
	I0812 11:49:24.173063   56845 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0812 11:49:24.173073   56845 kubeadm.go:310] 
	I0812 11:49:24.173209   56845 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token e42mo3.61s6ofjvy51u5vh7 \
	I0812 11:49:24.173363   56845 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc 
	I0812 11:49:24.173919   56845 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 11:49:24.173990   56845 cni.go:84] Creating CNI manager for ""
	I0812 11:49:24.174008   56845 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:49:24.176549   56845 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0812 11:49:25.662550   57198 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0812 11:49:25.662668   57198 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0812 11:49:25.664487   57198 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0812 11:49:25.664563   57198 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 11:49:25.664640   57198 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 11:49:25.664729   57198 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 11:49:25.664809   57198 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0812 11:49:25.664949   57198 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 11:49:25.666793   57198 out.go:204]   - Generating certificates and keys ...
	I0812 11:49:25.666861   57198 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 11:49:25.666925   57198 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 11:49:25.667017   57198 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0812 11:49:25.667091   57198 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0812 11:49:25.667181   57198 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0812 11:49:25.667232   57198 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0812 11:49:25.667306   57198 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0812 11:49:25.667359   57198 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0812 11:49:25.667437   57198 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0812 11:49:25.667536   57198 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0812 11:49:25.667592   57198 kubeadm.go:310] [certs] Using the existing "sa" key
	I0812 11:49:25.667680   57198 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 11:49:25.667754   57198 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 11:49:25.667839   57198 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 11:49:25.667950   57198 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 11:49:25.668040   57198 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 11:49:25.668189   57198 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 11:49:25.668289   57198 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 11:49:25.668333   57198 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 11:49:25.668400   57198 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 11:49:22.696279   57616 pod_ready.go:102] pod "coredns-6f6b679f8f-2gc2z" in "kube-system" namespace has status "Ready":"False"
	I0812 11:49:25.194695   57616 pod_ready.go:102] pod "coredns-6f6b679f8f-2gc2z" in "kube-system" namespace has status "Ready":"False"
	I0812 11:49:25.695175   57616 pod_ready.go:92] pod "coredns-6f6b679f8f-2gc2z" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:25.695199   57616 pod_ready.go:81] duration metric: took 7.007233179s for pod "coredns-6f6b679f8f-2gc2z" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:25.695209   57616 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-shfmr" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:25.670765   57198 out.go:204]   - Booting up control plane ...
	I0812 11:49:25.670861   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 11:49:25.670939   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 11:49:25.671039   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 11:49:25.671150   57198 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 11:49:25.671295   57198 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0812 11:49:25.671379   57198 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0812 11:49:25.671476   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:49:25.671647   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:49:25.671705   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:49:25.671862   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:49:25.671919   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:49:25.672079   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:49:25.672136   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:49:25.672288   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:49:25.672347   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:49:25.672558   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:49:25.672576   57198 kubeadm.go:310] 
	I0812 11:49:25.672636   57198 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0812 11:49:25.672686   57198 kubeadm.go:310] 		timed out waiting for the condition
	I0812 11:49:25.672701   57198 kubeadm.go:310] 
	I0812 11:49:25.672757   57198 kubeadm.go:310] 	This error is likely caused by:
	I0812 11:49:25.672811   57198 kubeadm.go:310] 		- The kubelet is not running
	I0812 11:49:25.672932   57198 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0812 11:49:25.672941   57198 kubeadm.go:310] 
	I0812 11:49:25.673048   57198 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0812 11:49:25.673091   57198 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0812 11:49:25.673133   57198 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0812 11:49:25.673141   57198 kubeadm.go:310] 
	I0812 11:49:25.673242   57198 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0812 11:49:25.673343   57198 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0812 11:49:25.673353   57198 kubeadm.go:310] 
	I0812 11:49:25.673513   57198 kubeadm.go:310] 	Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0812 11:49:25.673593   57198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0812 11:49:25.673660   57198 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0812 11:49:25.673724   57198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0812 11:49:25.673768   57198 kubeadm.go:310] 
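For the v1.20.0 run above the kubelet never became healthy and the control plane never came up. The checks the message recommends, gathered into one pass on the node (the container ID in the last command is a placeholder to fill in from the ps output):

# Inspect why the kubelet or a control-plane container never became healthy.
sudo systemctl status kubelet
sudo journalctl -xeu kubelet --no-pager | tail -n 100
sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # placeholder ID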
	W0812 11:49:25.673837   57198 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
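	Note: the kubelet-check hints printed by kubeadm above can be run by hand from a shell on the node (for example via minikube ssh into the affected VM); a minimal sketch using only the commands the log itself suggests:

	    # probe the kubelet health endpoint kubeadm is polling
	    curl -sSL http://localhost:10248/healthz
	    # inspect the kubelet service and its recent journal
	    systemctl status kubelet
	    journalctl -xeu kubelet
	    # list any control-plane containers cri-o managed to start
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause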
	
	I0812 11:49:25.673882   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0812 11:49:26.145437   57198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:49:26.160316   57198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 11:49:26.169638   57198 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 11:49:26.169664   57198 kubeadm.go:157] found existing configuration files:
	
	I0812 11:49:26.169711   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 11:49:26.179210   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 11:49:26.179278   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 11:49:26.189165   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 11:49:26.198952   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 11:49:26.199019   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 11:49:26.208905   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 11:49:26.217947   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 11:49:26.218003   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 11:49:26.227048   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 11:49:26.235890   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 11:49:26.235946   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
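	Note: the four grep/rm pairs above are minikube's stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed before kubeadm init is retried. A rough shell equivalent of that loop (a sketch, not the exact code path):

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -qs 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"   # stale or missing: drop it and let kubeadm regenerate it
	    done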
	I0812 11:49:26.245085   57198 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 11:49:26.313657   57198 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0812 11:49:26.313809   57198 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 11:49:26.463967   57198 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 11:49:26.464098   57198 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 11:49:26.464204   57198 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0812 11:49:26.650503   57198 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 11:49:26.652540   57198 out.go:204]   - Generating certificates and keys ...
	I0812 11:49:26.652631   57198 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 11:49:26.652686   57198 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 11:49:26.652751   57198 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0812 11:49:26.652803   57198 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0812 11:49:26.652913   57198 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0812 11:49:26.652983   57198 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0812 11:49:26.653052   57198 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0812 11:49:26.653157   57198 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0812 11:49:26.653299   57198 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0812 11:49:26.653430   57198 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0812 11:49:26.653489   57198 kubeadm.go:310] [certs] Using the existing "sa" key
	I0812 11:49:26.653569   57198 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 11:49:26.881003   57198 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 11:49:26.962055   57198 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 11:49:27.166060   57198 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 11:49:27.340900   57198 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 11:49:27.359946   57198 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 11:49:27.362022   57198 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 11:49:27.362302   57198 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 11:49:27.515254   57198 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 11:49:24.177809   56845 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0812 11:49:24.188175   56845 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0812 11:49:24.208060   56845 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 11:49:24.208152   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:24.208209   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-093615 minikube.k8s.io/updated_at=2024_08_12T11_49_24_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7 minikube.k8s.io/name=embed-certs-093615 minikube.k8s.io/primary=true
	I0812 11:49:24.393211   56845 ops.go:34] apiserver oom_adj: -16
	I0812 11:49:24.393296   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:24.894092   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:25.394229   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:25.893667   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:26.394057   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:26.893509   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:27.394296   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:27.893453   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:25.441104   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:27.517314   57198 out.go:204]   - Booting up control plane ...
	I0812 11:49:27.517444   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 11:49:27.523528   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 11:49:27.524732   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 11:49:27.525723   57198 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 11:49:27.527868   57198 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0812 11:49:27.702461   57616 pod_ready.go:102] pod "coredns-6f6b679f8f-shfmr" in "kube-system" namespace has status "Ready":"False"
	I0812 11:49:28.202582   57616 pod_ready.go:92] pod "coredns-6f6b679f8f-shfmr" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:28.202608   57616 pod_ready.go:81] duration metric: took 2.507391262s for pod "coredns-6f6b679f8f-shfmr" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.202621   57616 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.207529   57616 pod_ready.go:92] pod "etcd-no-preload-993542" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:28.207551   57616 pod_ready.go:81] duration metric: took 4.923206ms for pod "etcd-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.207560   57616 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.212760   57616 pod_ready.go:92] pod "kube-apiserver-no-preload-993542" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:28.212794   57616 pod_ready.go:81] duration metric: took 5.223592ms for pod "kube-apiserver-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.212807   57616 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.216970   57616 pod_ready.go:92] pod "kube-controller-manager-no-preload-993542" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:28.216993   57616 pod_ready.go:81] duration metric: took 4.177186ms for pod "kube-controller-manager-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.217004   57616 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8jwkz" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.221078   57616 pod_ready.go:92] pod "kube-proxy-8jwkz" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:28.221096   57616 pod_ready.go:81] duration metric: took 4.085629ms for pod "kube-proxy-8jwkz" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.221105   57616 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.600004   57616 pod_ready.go:92] pod "kube-scheduler-no-preload-993542" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:28.600031   57616 pod_ready.go:81] duration metric: took 378.92044ms for pod "kube-scheduler-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.600039   57616 pod_ready.go:38] duration metric: took 9.924247425s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:49:28.600053   57616 api_server.go:52] waiting for apiserver process to appear ...
	I0812 11:49:28.600102   57616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:49:28.615007   57616 api_server.go:72] duration metric: took 10.231634381s to wait for apiserver process to appear ...
	I0812 11:49:28.615043   57616 api_server.go:88] waiting for apiserver healthz status ...
	I0812 11:49:28.615063   57616 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8443/healthz ...
	I0812 11:49:28.620301   57616 api_server.go:279] https://192.168.61.148:8443/healthz returned 200:
	ok
	I0812 11:49:28.621814   57616 api_server.go:141] control plane version: v1.31.0-rc.0
	I0812 11:49:28.621843   57616 api_server.go:131] duration metric: took 6.792657ms to wait for apiserver health ...
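	Note: the healthz probe above hits the API server endpoint directly; the same check can be reproduced from the host with curl, assuming the default anonymous-auth/RBAC settings that expose /healthz to unauthenticated clients (certificate verification skipped for brevity):

	    curl -k https://192.168.61.148:8443/healthz    # expected output: ok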
	I0812 11:49:28.621858   57616 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 11:49:28.804172   57616 system_pods.go:59] 9 kube-system pods found
	I0812 11:49:28.804204   57616 system_pods.go:61] "coredns-6f6b679f8f-2gc2z" [4d5375c0-6f19-40b7-98bc-50d4ef45fd93] Running
	I0812 11:49:28.804208   57616 system_pods.go:61] "coredns-6f6b679f8f-shfmr" [6fd90de8-af9e-4b43-9fa7-b503a00e9845] Running
	I0812 11:49:28.804213   57616 system_pods.go:61] "etcd-no-preload-993542" [c3144e52-830b-47f1-913d-e44880368ee4] Running
	I0812 11:49:28.804216   57616 system_pods.go:61] "kube-apiserver-no-preload-993542" [73061d9a-d3cd-421a-bbd5-7bfe221d8729] Running
	I0812 11:49:28.804219   57616 system_pods.go:61] "kube-controller-manager-no-preload-993542" [0999e6c2-30b8-4d53-9420-6a00757eb9d4] Running
	I0812 11:49:28.804224   57616 system_pods.go:61] "kube-proxy-8jwkz" [43501e17-fde3-4468-a170-e64a58088ec2] Running
	I0812 11:49:28.804227   57616 system_pods.go:61] "kube-scheduler-no-preload-993542" [edaa4d82-7994-4052-ba5b-5729c543c006] Running
	I0812 11:49:28.804232   57616 system_pods.go:61] "metrics-server-6867b74b74-25zg8" [70d17780-d4bc-4df4-93ac-bb74c1fa50f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 11:49:28.804236   57616 system_pods.go:61] "storage-provisioner" [beb7a321-e575-44e5-8d10-3749d1285806] Running
	I0812 11:49:28.804244   57616 system_pods.go:74] duration metric: took 182.379622ms to wait for pod list to return data ...
	I0812 11:49:28.804251   57616 default_sa.go:34] waiting for default service account to be created ...
	I0812 11:49:28.999537   57616 default_sa.go:45] found service account: "default"
	I0812 11:49:28.999571   57616 default_sa.go:55] duration metric: took 195.31354ms for default service account to be created ...
	I0812 11:49:28.999582   57616 system_pods.go:116] waiting for k8s-apps to be running ...
	I0812 11:49:29.205266   57616 system_pods.go:86] 9 kube-system pods found
	I0812 11:49:29.205296   57616 system_pods.go:89] "coredns-6f6b679f8f-2gc2z" [4d5375c0-6f19-40b7-98bc-50d4ef45fd93] Running
	I0812 11:49:29.205301   57616 system_pods.go:89] "coredns-6f6b679f8f-shfmr" [6fd90de8-af9e-4b43-9fa7-b503a00e9845] Running
	I0812 11:49:29.205306   57616 system_pods.go:89] "etcd-no-preload-993542" [c3144e52-830b-47f1-913d-e44880368ee4] Running
	I0812 11:49:29.205310   57616 system_pods.go:89] "kube-apiserver-no-preload-993542" [73061d9a-d3cd-421a-bbd5-7bfe221d8729] Running
	I0812 11:49:29.205315   57616 system_pods.go:89] "kube-controller-manager-no-preload-993542" [0999e6c2-30b8-4d53-9420-6a00757eb9d4] Running
	I0812 11:49:29.205319   57616 system_pods.go:89] "kube-proxy-8jwkz" [43501e17-fde3-4468-a170-e64a58088ec2] Running
	I0812 11:49:29.205323   57616 system_pods.go:89] "kube-scheduler-no-preload-993542" [edaa4d82-7994-4052-ba5b-5729c543c006] Running
	I0812 11:49:29.205329   57616 system_pods.go:89] "metrics-server-6867b74b74-25zg8" [70d17780-d4bc-4df4-93ac-bb74c1fa50f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 11:49:29.205335   57616 system_pods.go:89] "storage-provisioner" [beb7a321-e575-44e5-8d10-3749d1285806] Running
	I0812 11:49:29.205342   57616 system_pods.go:126] duration metric: took 205.754437ms to wait for k8s-apps to be running ...
	I0812 11:49:29.205348   57616 system_svc.go:44] waiting for kubelet service to be running ....
	I0812 11:49:29.205390   57616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:49:29.220297   57616 system_svc.go:56] duration metric: took 14.940181ms WaitForService to wait for kubelet
	I0812 11:49:29.220343   57616 kubeadm.go:582] duration metric: took 10.836962086s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 11:49:29.220369   57616 node_conditions.go:102] verifying NodePressure condition ...
	I0812 11:49:29.400598   57616 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 11:49:29.400634   57616 node_conditions.go:123] node cpu capacity is 2
	I0812 11:49:29.400648   57616 node_conditions.go:105] duration metric: took 180.272764ms to run NodePressure ...
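	Note: the NodePressure step above reads the node's reported capacity (2 CPUs, 17734596Ki ephemeral storage in this run); one way to view the same figures from the host with kubectl, using the node name from this run:

	    kubectl get node no-preload-993542 -o jsonpath='{.status.capacity}'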
	I0812 11:49:29.400663   57616 start.go:241] waiting for startup goroutines ...
	I0812 11:49:29.400675   57616 start.go:246] waiting for cluster config update ...
	I0812 11:49:29.400691   57616 start.go:255] writing updated cluster config ...
	I0812 11:49:29.401086   57616 ssh_runner.go:195] Run: rm -f paused
	I0812 11:49:29.454975   57616 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-rc.0 (minor skew: 1)
	I0812 11:49:29.457349   57616 out.go:177] * Done! kubectl is now configured to use "no-preload-993542" cluster and "default" namespace by default
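	Note: with the profile finished, the resulting context can be exercised directly from the host; minikube names the kubectl context after the profile, so (assuming the kubeconfig written above):

	    kubectl --context no-preload-993542 get nodes
	    kubectl --context no-preload-993542 -n kube-system get pods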
	I0812 11:49:28.394104   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:28.894284   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:29.393380   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:29.893417   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:30.394034   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:30.893668   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:31.394322   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:31.894069   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:32.393691   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:32.893944   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:31.517192   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:33.393880   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:33.894126   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:34.393857   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:34.893356   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:35.394181   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:35.894116   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:36.393690   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:36.893650   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:37.394325   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:37.524187   56845 kubeadm.go:1113] duration metric: took 13.316085022s to wait for elevateKubeSystemPrivileges
	I0812 11:49:37.524225   56845 kubeadm.go:394] duration metric: took 5m12.500523071s to StartCluster
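	Note: the long run of 'kubectl get sa default' calls above is minikube polling, roughly every 500ms, until the "default" ServiceAccount exists after the minikube-rbac cluster role binding has been applied; a bash approximation of that wait loop, reusing the exact command from the log:

	    until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done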
	I0812 11:49:37.524246   56845 settings.go:142] acquiring lock: {Name:mk4060151bda1f131d6abfbbd19b6a8ab3b5e774 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:49:37.524334   56845 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 11:49:37.526822   56845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/kubeconfig: {Name:mke68f1d372d8e5baa69199b426efaec54860499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:49:37.527125   56845 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.191 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 11:49:37.527189   56845 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0812 11:49:37.527272   56845 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-093615"
	I0812 11:49:37.527285   56845 addons.go:69] Setting default-storageclass=true in profile "embed-certs-093615"
	I0812 11:49:37.527307   56845 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-093615"
	I0812 11:49:37.527307   56845 config.go:182] Loaded profile config "embed-certs-093615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	W0812 11:49:37.527315   56845 addons.go:243] addon storage-provisioner should already be in state true
	I0812 11:49:37.527318   56845 addons.go:69] Setting metrics-server=true in profile "embed-certs-093615"
	I0812 11:49:37.527337   56845 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-093615"
	I0812 11:49:37.527345   56845 host.go:66] Checking if "embed-certs-093615" exists ...
	I0812 11:49:37.527362   56845 addons.go:234] Setting addon metrics-server=true in "embed-certs-093615"
	W0812 11:49:37.527375   56845 addons.go:243] addon metrics-server should already be in state true
	I0812 11:49:37.527413   56845 host.go:66] Checking if "embed-certs-093615" exists ...
	I0812 11:49:37.527769   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.527791   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.527816   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.527798   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.527769   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.527928   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.528806   56845 out.go:177] * Verifying Kubernetes components...
	I0812 11:49:37.530366   56845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:49:37.544367   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33533
	I0812 11:49:37.544919   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45995
	I0812 11:49:37.545052   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.545492   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.545535   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.545551   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.546095   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.546220   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.546247   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.546267   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetState
	I0812 11:49:37.547090   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.547667   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.547697   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.548008   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45761
	I0812 11:49:37.550024   56845 addons.go:234] Setting addon default-storageclass=true in "embed-certs-093615"
	W0812 11:49:37.550048   56845 addons.go:243] addon default-storageclass should already be in state true
	I0812 11:49:37.550079   56845 host.go:66] Checking if "embed-certs-093615" exists ...
	I0812 11:49:37.550469   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.550500   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.550728   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.551342   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.551373   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.551748   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.552314   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.552354   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.566505   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40769
	I0812 11:49:37.567085   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.567510   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.567526   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.567900   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.568133   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetState
	I0812 11:49:37.570307   56845 main.go:141] libmachine: (embed-certs-093615) Calling .DriverName
	I0812 11:49:37.571789   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36425
	I0812 11:49:37.572127   56845 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 11:49:37.572191   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.572730   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.572752   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.573044   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43723
	I0812 11:49:37.573231   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.573619   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.573815   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.573840   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.573849   56845 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 11:49:37.573870   56845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 11:49:37.573890   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHHostname
	I0812 11:49:37.574787   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.574809   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.575722   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.575937   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetState
	I0812 11:49:37.578054   56845 main.go:141] libmachine: (embed-certs-093615) DBG | domain embed-certs-093615 has defined MAC address 52:54:00:a2:eb:0c in network mk-embed-certs-093615
	I0812 11:49:37.578069   56845 main.go:141] libmachine: (embed-certs-093615) Calling .DriverName
	I0812 11:49:37.578536   56845 main.go:141] libmachine: (embed-certs-093615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:eb:0c", ip: ""} in network mk-embed-certs-093615: {Iface:virbr3 ExpiryTime:2024-08-12 12:44:10 +0000 UTC Type:0 Mac:52:54:00:a2:eb:0c Iaid: IPaddr:192.168.72.191 Prefix:24 Hostname:embed-certs-093615 Clientid:01:52:54:00:a2:eb:0c}
	I0812 11:49:37.578565   56845 main.go:141] libmachine: (embed-certs-093615) DBG | domain embed-certs-093615 has defined IP address 192.168.72.191 and MAC address 52:54:00:a2:eb:0c in network mk-embed-certs-093615
	I0812 11:49:37.578833   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHPort
	I0812 11:49:37.579012   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHKeyPath
	I0812 11:49:37.579170   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHUsername
	I0812 11:49:37.579326   56845 sshutil.go:53] new ssh client: &{IP:192.168.72.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/embed-certs-093615/id_rsa Username:docker}
	I0812 11:49:37.580007   56845 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0812 11:49:37.581298   56845 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0812 11:49:37.581313   56845 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0812 11:49:37.581334   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHHostname
	I0812 11:49:37.585114   56845 main.go:141] libmachine: (embed-certs-093615) DBG | domain embed-certs-093615 has defined MAC address 52:54:00:a2:eb:0c in network mk-embed-certs-093615
	I0812 11:49:37.585809   56845 main.go:141] libmachine: (embed-certs-093615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:eb:0c", ip: ""} in network mk-embed-certs-093615: {Iface:virbr3 ExpiryTime:2024-08-12 12:44:10 +0000 UTC Type:0 Mac:52:54:00:a2:eb:0c Iaid: IPaddr:192.168.72.191 Prefix:24 Hostname:embed-certs-093615 Clientid:01:52:54:00:a2:eb:0c}
	I0812 11:49:37.585839   56845 main.go:141] libmachine: (embed-certs-093615) DBG | domain embed-certs-093615 has defined IP address 192.168.72.191 and MAC address 52:54:00:a2:eb:0c in network mk-embed-certs-093615
	I0812 11:49:37.585914   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHPort
	I0812 11:49:37.586160   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHKeyPath
	I0812 11:49:37.586338   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHUsername
	I0812 11:49:37.586476   56845 sshutil.go:53] new ssh client: &{IP:192.168.72.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/embed-certs-093615/id_rsa Username:docker}
	I0812 11:49:37.591678   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I0812 11:49:37.592146   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.592684   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.592702   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.593075   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.593241   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetState
	I0812 11:49:37.595117   56845 main.go:141] libmachine: (embed-certs-093615) Calling .DriverName
	I0812 11:49:37.595398   56845 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 11:49:37.595413   56845 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 11:49:37.595430   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHHostname
	I0812 11:49:37.598417   56845 main.go:141] libmachine: (embed-certs-093615) DBG | domain embed-certs-093615 has defined MAC address 52:54:00:a2:eb:0c in network mk-embed-certs-093615
	I0812 11:49:37.598771   56845 main.go:141] libmachine: (embed-certs-093615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:eb:0c", ip: ""} in network mk-embed-certs-093615: {Iface:virbr3 ExpiryTime:2024-08-12 12:44:10 +0000 UTC Type:0 Mac:52:54:00:a2:eb:0c Iaid: IPaddr:192.168.72.191 Prefix:24 Hostname:embed-certs-093615 Clientid:01:52:54:00:a2:eb:0c}
	I0812 11:49:37.598792   56845 main.go:141] libmachine: (embed-certs-093615) DBG | domain embed-certs-093615 has defined IP address 192.168.72.191 and MAC address 52:54:00:a2:eb:0c in network mk-embed-certs-093615
	I0812 11:49:37.599008   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHPort
	I0812 11:49:37.599209   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHKeyPath
	I0812 11:49:37.599369   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHUsername
	I0812 11:49:37.599507   56845 sshutil.go:53] new ssh client: &{IP:192.168.72.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/embed-certs-093615/id_rsa Username:docker}
	I0812 11:49:37.757714   56845 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 11:49:37.783594   56845 node_ready.go:35] waiting up to 6m0s for node "embed-certs-093615" to be "Ready" ...
	I0812 11:49:37.801679   56845 node_ready.go:49] node "embed-certs-093615" has status "Ready":"True"
	I0812 11:49:37.801707   56845 node_ready.go:38] duration metric: took 18.078817ms for node "embed-certs-093615" to be "Ready" ...
	I0812 11:49:37.801719   56845 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
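	Note: the pod_ready loop that follows checks each system-critical pod for the Ready condition; an approximate hand-run equivalent with kubectl wait, using label selectors from the log (context name assumed to match the profile):

	    kubectl --context embed-certs-093615 -n kube-system wait --for=condition=Ready \
	      pod -l k8s-app=kube-dns --timeout=6m
	    kubectl --context embed-certs-093615 -n kube-system wait --for=condition=Ready \
	      pod -l component=kube-apiserver --timeout=6m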
	I0812 11:49:37.814704   56845 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cjbwn" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:37.860064   56845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 11:49:37.913642   56845 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0812 11:49:37.913673   56845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0812 11:49:37.932638   56845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 11:49:37.948027   56845 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0812 11:49:37.948052   56845 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0812 11:49:38.000773   56845 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 11:49:38.000805   56845 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0812 11:49:38.050478   56845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
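	Note: all four metrics-server manifests are applied in one kubectl invocation inside the VM; a sketch of running the same apply by hand over minikube ssh for this profile (paths as scp'd above, ssh pass-through syntax assumed):

	    minikube ssh -p embed-certs-093615 -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.30.3/kubectl apply \
	      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
	      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
	      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
	      -f /etc/kubernetes/addons/metrics-server-service.yaml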
	I0812 11:49:38.655431   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.655458   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.655477   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.655460   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.655760   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.655875   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.655888   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.655897   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.655792   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.655971   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.655979   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.655986   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.655812   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.655832   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.656156   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.656161   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.656172   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.656199   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.656225   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.656231   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.707240   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.707268   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.707596   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.707618   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.707667   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.832725   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.832758   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.833072   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.833114   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.833134   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.833155   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.833165   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.833416   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.833461   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.833472   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.833483   56845 addons.go:475] Verifying addon metrics-server=true in "embed-certs-093615"
	I0812 11:49:38.835319   56845 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0812 11:49:34.589171   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:38.836977   56845 addons.go:510] duration metric: took 1.309786928s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0812 11:49:39.827672   56845 pod_ready.go:102] pod "coredns-7db6d8ff4d-cjbwn" in "kube-system" namespace has status "Ready":"False"
	I0812 11:49:40.820793   56845 pod_ready.go:92] pod "coredns-7db6d8ff4d-cjbwn" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:40.820818   56845 pod_ready.go:81] duration metric: took 3.006078866s for pod "coredns-7db6d8ff4d-cjbwn" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.820828   56845 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zcpcc" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.825674   56845 pod_ready.go:92] pod "coredns-7db6d8ff4d-zcpcc" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:40.825696   56845 pod_ready.go:81] duration metric: took 4.862671ms for pod "coredns-7db6d8ff4d-zcpcc" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.825705   56845 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.830668   56845 pod_ready.go:92] pod "etcd-embed-certs-093615" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:40.830690   56845 pod_ready.go:81] duration metric: took 4.979449ms for pod "etcd-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.830699   56845 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.834732   56845 pod_ready.go:92] pod "kube-apiserver-embed-certs-093615" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:40.834750   56845 pod_ready.go:81] duration metric: took 4.044023ms for pod "kube-apiserver-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.834759   56845 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.838476   56845 pod_ready.go:92] pod "kube-controller-manager-embed-certs-093615" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:40.838493   56845 pod_ready.go:81] duration metric: took 3.728686ms for pod "kube-controller-manager-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.838502   56845 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-26xvl" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:41.219756   56845 pod_ready.go:92] pod "kube-proxy-26xvl" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:41.219778   56845 pod_ready.go:81] duration metric: took 381.271425ms for pod "kube-proxy-26xvl" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:41.219789   56845 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:41.619078   56845 pod_ready.go:92] pod "kube-scheduler-embed-certs-093615" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:41.619107   56845 pod_ready.go:81] duration metric: took 399.30989ms for pod "kube-scheduler-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:41.619117   56845 pod_ready.go:38] duration metric: took 3.817386457s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:49:41.619135   56845 api_server.go:52] waiting for apiserver process to appear ...
	I0812 11:49:41.619197   56845 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:49:41.634452   56845 api_server.go:72] duration metric: took 4.107285578s to wait for apiserver process to appear ...
	I0812 11:49:41.634480   56845 api_server.go:88] waiting for apiserver healthz status ...
	I0812 11:49:41.634505   56845 api_server.go:253] Checking apiserver healthz at https://192.168.72.191:8443/healthz ...
	I0812 11:49:41.639610   56845 api_server.go:279] https://192.168.72.191:8443/healthz returned 200:
	ok
	I0812 11:49:41.640514   56845 api_server.go:141] control plane version: v1.30.3
	I0812 11:49:41.640537   56845 api_server.go:131] duration metric: took 6.049802ms to wait for apiserver health ...
	I0812 11:49:41.640547   56845 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 11:49:41.823614   56845 system_pods.go:59] 9 kube-system pods found
	I0812 11:49:41.823652   56845 system_pods.go:61] "coredns-7db6d8ff4d-cjbwn" [ec8ff679-9b23-481d-b8c5-207b54e7e5ea] Running
	I0812 11:49:41.823659   56845 system_pods.go:61] "coredns-7db6d8ff4d-zcpcc" [ed76b19c-cd96-4754-ae07-08a2a0b91387] Running
	I0812 11:49:41.823665   56845 system_pods.go:61] "etcd-embed-certs-093615" [853d7fe8-00c2-434f-b88a-2b37e1608906] Running
	I0812 11:49:41.823670   56845 system_pods.go:61] "kube-apiserver-embed-certs-093615" [983122d1-800a-4991-96f8-29ae69ea7166] Running
	I0812 11:49:41.823675   56845 system_pods.go:61] "kube-controller-manager-embed-certs-093615" [b9eceb97-a4bd-43e2-a115-c483c9131fa7] Running
	I0812 11:49:41.823680   56845 system_pods.go:61] "kube-proxy-26xvl" [cacdea2f-2ce2-43ab-8e3e-104a7a40d027] Running
	I0812 11:49:41.823685   56845 system_pods.go:61] "kube-scheduler-embed-certs-093615" [b5653b7a-db54-4584-ab69-1232a9c58d9c] Running
	I0812 11:49:41.823693   56845 system_pods.go:61] "metrics-server-569cc877fc-kwk6t" [5817f68c-ab3e-4b50-acf1-8d56d25dcbcd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 11:49:41.823697   56845 system_pods.go:61] "storage-provisioner" [c29d9422-fc62-4536-974b-70ba940152c2] Running
	I0812 11:49:41.823704   56845 system_pods.go:74] duration metric: took 183.151482ms to wait for pod list to return data ...
	I0812 11:49:41.823711   56845 default_sa.go:34] waiting for default service account to be created ...
	I0812 11:49:42.017840   56845 default_sa.go:45] found service account: "default"
	I0812 11:49:42.017870   56845 default_sa.go:55] duration metric: took 194.151916ms for default service account to be created ...
	I0812 11:49:42.017886   56845 system_pods.go:116] waiting for k8s-apps to be running ...
	I0812 11:49:42.222050   56845 system_pods.go:86] 9 kube-system pods found
	I0812 11:49:42.222084   56845 system_pods.go:89] "coredns-7db6d8ff4d-cjbwn" [ec8ff679-9b23-481d-b8c5-207b54e7e5ea] Running
	I0812 11:49:42.222092   56845 system_pods.go:89] "coredns-7db6d8ff4d-zcpcc" [ed76b19c-cd96-4754-ae07-08a2a0b91387] Running
	I0812 11:49:42.222098   56845 system_pods.go:89] "etcd-embed-certs-093615" [853d7fe8-00c2-434f-b88a-2b37e1608906] Running
	I0812 11:49:42.222104   56845 system_pods.go:89] "kube-apiserver-embed-certs-093615" [983122d1-800a-4991-96f8-29ae69ea7166] Running
	I0812 11:49:42.222110   56845 system_pods.go:89] "kube-controller-manager-embed-certs-093615" [b9eceb97-a4bd-43e2-a115-c483c9131fa7] Running
	I0812 11:49:42.222116   56845 system_pods.go:89] "kube-proxy-26xvl" [cacdea2f-2ce2-43ab-8e3e-104a7a40d027] Running
	I0812 11:49:42.222122   56845 system_pods.go:89] "kube-scheduler-embed-certs-093615" [b5653b7a-db54-4584-ab69-1232a9c58d9c] Running
	I0812 11:49:42.222133   56845 system_pods.go:89] "metrics-server-569cc877fc-kwk6t" [5817f68c-ab3e-4b50-acf1-8d56d25dcbcd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 11:49:42.222140   56845 system_pods.go:89] "storage-provisioner" [c29d9422-fc62-4536-974b-70ba940152c2] Running
	I0812 11:49:42.222157   56845 system_pods.go:126] duration metric: took 204.263322ms to wait for k8s-apps to be running ...
	I0812 11:49:42.222169   56845 system_svc.go:44] waiting for kubelet service to be running ....
	I0812 11:49:42.222224   56845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:49:42.235891   56845 system_svc.go:56] duration metric: took 13.715083ms WaitForService to wait for kubelet
	I0812 11:49:42.235920   56845 kubeadm.go:582] duration metric: took 4.708757648s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 11:49:42.235945   56845 node_conditions.go:102] verifying NodePressure condition ...
	I0812 11:49:42.418727   56845 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 11:49:42.418761   56845 node_conditions.go:123] node cpu capacity is 2
	I0812 11:49:42.418773   56845 node_conditions.go:105] duration metric: took 182.823582ms to run NodePressure ...
	I0812 11:49:42.418789   56845 start.go:241] waiting for startup goroutines ...
	I0812 11:49:42.418799   56845 start.go:246] waiting for cluster config update ...
	I0812 11:49:42.418812   56845 start.go:255] writing updated cluster config ...
	I0812 11:49:42.419150   56845 ssh_runner.go:195] Run: rm -f paused
	I0812 11:49:42.468981   56845 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0812 11:49:42.471931   56845 out.go:177] * Done! kubectl is now configured to use "embed-certs-093615" cluster and "default" namespace by default
	I0812 11:49:40.669207   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:43.741090   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:49.821138   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:52.893281   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:58.973141   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:02.045165   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:08.129133   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:07.530363   57198 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0812 11:50:07.530652   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:50:07.530821   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:50:11.197137   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:12.531246   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:50:12.531502   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:50:17.277119   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:20.349149   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:22.532192   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:50:22.532372   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:50:26.429100   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:29.501158   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:35.581137   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:38.653143   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:42.533597   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:50:42.533815   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:50:44.733130   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:47.805192   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:53.885100   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:56.957154   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:03.037201   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:06.109079   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:12.189138   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:15.261132   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:22.535173   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:51:22.535490   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:51:22.535516   57198 kubeadm.go:310] 
	I0812 11:51:22.535573   57198 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0812 11:51:22.535625   57198 kubeadm.go:310] 		timed out waiting for the condition
	I0812 11:51:22.535646   57198 kubeadm.go:310] 
	I0812 11:51:22.535692   57198 kubeadm.go:310] 	This error is likely caused by:
	I0812 11:51:22.535728   57198 kubeadm.go:310] 		- The kubelet is not running
	I0812 11:51:22.535855   57198 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0812 11:51:22.535870   57198 kubeadm.go:310] 
	I0812 11:51:22.535954   57198 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0812 11:51:22.535985   57198 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0812 11:51:22.536028   57198 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0812 11:51:22.536038   57198 kubeadm.go:310] 
	I0812 11:51:22.536168   57198 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0812 11:51:22.536276   57198 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0812 11:51:22.536290   57198 kubeadm.go:310] 
	I0812 11:51:22.536440   57198 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0812 11:51:22.536532   57198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0812 11:51:22.536610   57198 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0812 11:51:22.536692   57198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0812 11:51:22.536701   57198 kubeadm.go:310] 
	I0812 11:51:22.537300   57198 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 11:51:22.537416   57198 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0812 11:51:22.537516   57198 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
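	The kubeadm advice above condenses into a short diagnostic sequence. A minimal sketch, assuming shell access to the affected VM (e.g. via 'minikube ssh -p <profile>'; the profile name for this v1.20.0 run is not shown in this excerpt):

		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet | tail -n 100
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	In this run the crictl listings gathered just below come back empty for every control-plane component, which points at the kubelet never starting rather than at a crashing container.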
	I0812 11:51:22.537602   57198 kubeadm.go:394] duration metric: took 7m56.533771451s to StartCluster
	I0812 11:51:22.537650   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:51:22.537769   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:51:22.583654   57198 cri.go:89] found id: ""
	I0812 11:51:22.583679   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.583686   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:51:22.583692   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:51:22.583739   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:51:22.619477   57198 cri.go:89] found id: ""
	I0812 11:51:22.619510   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.619521   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:51:22.619528   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:51:22.619586   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:51:22.653038   57198 cri.go:89] found id: ""
	I0812 11:51:22.653068   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.653078   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:51:22.653085   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:51:22.653149   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:51:22.686106   57198 cri.go:89] found id: ""
	I0812 11:51:22.686134   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.686142   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:51:22.686148   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:51:22.686196   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:51:22.723533   57198 cri.go:89] found id: ""
	I0812 11:51:22.723560   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.723567   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:51:22.723572   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:51:22.723629   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:51:22.767355   57198 cri.go:89] found id: ""
	I0812 11:51:22.767382   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.767390   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:51:22.767395   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:51:22.767472   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:51:22.807472   57198 cri.go:89] found id: ""
	I0812 11:51:22.807509   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.807522   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:51:22.807530   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:51:22.807604   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:51:22.842565   57198 cri.go:89] found id: ""
	I0812 11:51:22.842594   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.842603   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:51:22.842615   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:51:22.842629   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:51:22.894638   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:51:22.894677   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:51:22.907871   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:51:22.907902   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:51:22.989089   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:51:22.989114   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:51:22.989126   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:51:23.114659   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:51:23.114713   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0812 11:51:23.168124   57198 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0812 11:51:23.168182   57198 out.go:239] * 
	W0812 11:51:23.168252   57198 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0812 11:51:23.168284   57198 out.go:239] * 
	W0812 11:51:23.169113   57198 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 11:51:23.173151   57198 out.go:177] 
	W0812 11:51:23.174712   57198 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0812 11:51:23.174762   57198 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0812 11:51:23.174782   57198 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0812 11:51:23.176508   57198 out.go:177] 
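	The suggestion above targets the kubelet cgroup driver. A retry along those lines, as a sketch only (the profile name for this v1.20.0 run is not shown in this excerpt, so <profile> is a placeholder; driver and runtime follow the KVM_Linux_crio job configuration):

		minikube start -p <profile> --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

	If the kubelet still fails to come up, 'journalctl -xeu kubelet' on the VM is the next step, per the linked issue.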
	I0812 11:51:21.341126   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:24.413107   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:30.493143   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:33.569122   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:36.569554   59908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 11:51:36.569591   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetMachineName
	I0812 11:51:36.569943   59908 buildroot.go:166] provisioning hostname "default-k8s-diff-port-581883"
	I0812 11:51:36.569973   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetMachineName
	I0812 11:51:36.570201   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:51:36.571680   59908 machine.go:97] duration metric: took 4m37.426765365s to provisionDockerMachine
	I0812 11:51:36.571724   59908 fix.go:56] duration metric: took 4m37.448153773s for fixHost
	I0812 11:51:36.571736   59908 start.go:83] releasing machines lock for "default-k8s-diff-port-581883", held for 4m37.448177825s
	W0812 11:51:36.571759   59908 start.go:714] error starting host: provision: host is not running
	W0812 11:51:36.571863   59908 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0812 11:51:36.571879   59908 start.go:729] Will try again in 5 seconds ...
	I0812 11:51:41.573924   59908 start.go:360] acquireMachinesLock for default-k8s-diff-port-581883: {Name:mkf1f3ff7da562a087a1e344d13e67c3c8140973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 11:51:41.574052   59908 start.go:364] duration metric: took 85.852µs to acquireMachinesLock for "default-k8s-diff-port-581883"
	I0812 11:51:41.574082   59908 start.go:96] Skipping create...Using existing machine configuration
	I0812 11:51:41.574092   59908 fix.go:54] fixHost starting: 
	I0812 11:51:41.574362   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:51:41.574405   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:51:41.589947   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37355
	I0812 11:51:41.590440   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:51:41.590917   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:51:41.590937   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:51:41.591264   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:51:41.591434   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:51:41.591577   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetState
	I0812 11:51:41.593079   59908 fix.go:112] recreateIfNeeded on default-k8s-diff-port-581883: state=Stopped err=<nil>
	I0812 11:51:41.593104   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	W0812 11:51:41.593250   59908 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 11:51:41.595246   59908 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-581883" ...
	I0812 11:51:41.596770   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Start
	I0812 11:51:41.596979   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Ensuring networks are active...
	I0812 11:51:41.598006   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Ensuring network default is active
	I0812 11:51:41.598500   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Ensuring network mk-default-k8s-diff-port-581883 is active
	I0812 11:51:41.598920   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Getting domain xml...
	I0812 11:51:41.599684   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Creating domain...
	I0812 11:51:42.863317   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting to get IP...
	I0812 11:51:42.864358   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:42.864816   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:42.864907   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:42.864802   61181 retry.go:31] will retry after 220.174363ms: waiting for machine to come up
	I0812 11:51:43.086204   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:43.086832   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:43.086861   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:43.086783   61181 retry.go:31] will retry after 342.897936ms: waiting for machine to come up
	I0812 11:51:43.431059   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:43.431549   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:43.431584   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:43.431497   61181 retry.go:31] will retry after 465.154278ms: waiting for machine to come up
	I0812 11:51:43.898042   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:43.898580   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:43.898604   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:43.898518   61181 retry.go:31] will retry after 498.287765ms: waiting for machine to come up
	I0812 11:51:44.398086   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:44.398736   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:44.398763   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:44.398682   61181 retry.go:31] will retry after 617.809106ms: waiting for machine to come up
	I0812 11:51:45.018733   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:45.019273   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:45.019307   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:45.019217   61181 retry.go:31] will retry after 864.46319ms: waiting for machine to come up
	I0812 11:51:45.885081   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:45.885555   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:45.885585   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:45.885529   61181 retry.go:31] will retry after 1.067767105s: waiting for machine to come up
	I0812 11:51:46.954710   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:46.955061   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:46.955087   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:46.955020   61181 retry.go:31] will retry after 927.472236ms: waiting for machine to come up
	I0812 11:51:47.883766   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:47.884191   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:47.884216   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:47.884146   61181 retry.go:31] will retry after 1.493170608s: waiting for machine to come up
	I0812 11:51:49.378898   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:49.379317   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:49.379350   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:49.379297   61181 retry.go:31] will retry after 1.599397392s: waiting for machine to come up
	I0812 11:51:50.981013   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:50.981714   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:50.981745   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:50.981642   61181 retry.go:31] will retry after 1.779019847s: waiting for machine to come up
	I0812 11:51:52.762246   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:52.762670   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:52.762707   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:52.762629   61181 retry.go:31] will retry after 3.410620248s: waiting for machine to come up
	I0812 11:51:56.175010   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:56.175542   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:56.175573   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:56.175490   61181 retry.go:31] will retry after 3.890343984s: waiting for machine to come up
	I0812 11:52:00.069904   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.070591   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has current primary IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.070606   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Found IP for machine: 192.168.50.114
	I0812 11:52:00.070616   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Reserving static IP address...
	I0812 11:52:00.071153   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Reserved static IP address: 192.168.50.114
	I0812 11:52:00.071183   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for SSH to be available...
	I0812 11:52:00.071206   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-581883", mac: "52:54:00:76:2f:ab", ip: "192.168.50.114"} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:00.071228   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | skip adding static IP to network mk-default-k8s-diff-port-581883 - found existing host DHCP lease matching {name: "default-k8s-diff-port-581883", mac: "52:54:00:76:2f:ab", ip: "192.168.50.114"}
	I0812 11:52:00.071242   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | Getting to WaitForSSH function...
	I0812 11:52:00.073315   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.073647   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:00.073676   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.073838   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | Using SSH client type: external
	I0812 11:52:00.073868   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | Using SSH private key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa (-rw-------)
	I0812 11:52:00.073909   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0812 11:52:00.073926   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | About to run SSH command:
	I0812 11:52:00.073941   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | exit 0
	I0812 11:52:00.201064   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | SSH cmd err, output: <nil>: 
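	For reference, the external SSH probe logged above is equivalent to running the following from the host (reconstructed from the argument list in the log; '/usr/bin/ssh', the key path, and the remote command 'exit 0' are taken verbatim from it):

		/usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa -p 22 "exit 0"

	The empty output and nil error indicate the VM finally accepted the connection after the long series of 'no route to host' retries.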
	I0812 11:52:00.201417   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetConfigRaw
	I0812 11:52:00.202026   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetIP
	I0812 11:52:00.204566   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.204855   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:00.204895   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.205179   59908 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/config.json ...
	I0812 11:52:00.205369   59908 machine.go:94] provisionDockerMachine start ...
	I0812 11:52:00.205387   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:00.205698   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:00.208214   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.208623   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:00.208656   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.208749   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:00.208932   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:00.209111   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:00.209227   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:00.209359   59908 main.go:141] libmachine: Using SSH client type: native
	I0812 11:52:00.209519   59908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0812 11:52:00.209529   59908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0812 11:52:00.317075   59908 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0812 11:52:00.317106   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetMachineName
	I0812 11:52:00.317394   59908 buildroot.go:166] provisioning hostname "default-k8s-diff-port-581883"
	I0812 11:52:00.317427   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetMachineName
	I0812 11:52:00.317617   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:00.320809   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.321256   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:00.321297   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.321415   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:00.321625   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:00.321793   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:00.321927   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:00.322174   59908 main.go:141] libmachine: Using SSH client type: native
	I0812 11:52:00.322337   59908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0812 11:52:00.322350   59908 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-581883 && echo "default-k8s-diff-port-581883" | sudo tee /etc/hostname
	I0812 11:52:00.448512   59908 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-581883
	
	I0812 11:52:00.448544   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:00.451372   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.451915   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:00.451942   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.452144   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:00.452341   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:00.452510   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:00.452661   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:00.452823   59908 main.go:141] libmachine: Using SSH client type: native
	I0812 11:52:00.453021   59908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0812 11:52:00.453038   59908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-581883' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-581883/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-581883' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 11:52:00.569754   59908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
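	The hostname snippet the provisioner just ran is plain shell; the same logic, annotated (hostname hard-coded exactly as in the log), reads:

		# add the machine name to /etc/hosts unless some line already ends with it
		if ! grep -xq '.*\sdefault-k8s-diff-port-581883' /etc/hosts; then
		  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		    # an existing 127.0.1.1 entry is rewritten to point at this hostname
		    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-581883/g' /etc/hosts
		  else
		    # otherwise a fresh 127.0.1.1 entry is appended
		    echo '127.0.1.1 default-k8s-diff-port-581883' | sudo tee -a /etc/hosts
		  fi
		fi

	The empty output above simply means the command completed without error.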
	I0812 11:52:00.569791   59908 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19409-3774/.minikube CaCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19409-3774/.minikube}
	I0812 11:52:00.569808   59908 buildroot.go:174] setting up certificates
	I0812 11:52:00.569818   59908 provision.go:84] configureAuth start
	I0812 11:52:00.569829   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetMachineName
	I0812 11:52:00.570114   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetIP
	I0812 11:52:00.572834   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.573325   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:00.573357   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.573549   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:00.576212   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.576670   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:00.576717   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.576915   59908 provision.go:143] copyHostCerts
	I0812 11:52:00.576979   59908 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem, removing ...
	I0812 11:52:00.576989   59908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem
	I0812 11:52:00.577051   59908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem (1082 bytes)
	I0812 11:52:00.577148   59908 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem, removing ...
	I0812 11:52:00.577157   59908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem
	I0812 11:52:00.577184   59908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem (1123 bytes)
	I0812 11:52:00.577241   59908 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem, removing ...
	I0812 11:52:00.577248   59908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem
	I0812 11:52:00.577270   59908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem (1679 bytes)
	I0812 11:52:00.577366   59908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-581883 san=[127.0.0.1 192.168.50.114 default-k8s-diff-port-581883 localhost minikube]
	I0812 11:52:01.053674   59908 provision.go:177] copyRemoteCerts
	I0812 11:52:01.053733   59908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 11:52:01.053756   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:01.056305   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.056840   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:01.056894   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.057105   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:01.057325   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:01.057486   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:01.057641   59908 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa Username:docker}
	I0812 11:52:01.142765   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0812 11:52:01.168430   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0812 11:52:01.193360   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0812 11:52:01.218125   59908 provision.go:87] duration metric: took 648.29686ms to configureAuth
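
A minimal sketch (not part of the captured log), assuming a local copy of the server.pem that provision.go writes above: it uses Go's crypto/x509 to confirm the certificate actually covers the SANs the provisioner requested (127.0.0.1, 192.168.50.114, the VM hostname, localhost, minikube). The file name "server.pem" here is a hypothetical local path for illustration.

// sancheck.go - illustrative sketch only; verifies that a server certificate
// covers the SANs requested by the provisioning step above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Hypothetical local copy of the cert the log writes to .minikube/machines/server.pem.
	data, err := os.ReadFile("server.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read cert:", err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, "parse cert:", err)
		os.Exit(1)
	}
	// SANs named in the log line above.
	for _, name := range []string{"127.0.0.1", "192.168.50.114", "localhost", "minikube"} {
		if err := cert.VerifyHostname(name); err != nil {
			fmt.Printf("missing SAN %q: %v\n", name, err)
		} else {
			fmt.Printf("SAN %q ok\n", name)
		}
	}
}
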
	I0812 11:52:01.218151   59908 buildroot.go:189] setting minikube options for container-runtime
	I0812 11:52:01.218337   59908 config.go:182] Loaded profile config "default-k8s-diff-port-581883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 11:52:01.218432   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:01.221497   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.221858   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:01.221887   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.222077   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:01.222261   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:01.222436   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:01.222596   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:01.222736   59908 main.go:141] libmachine: Using SSH client type: native
	I0812 11:52:01.222963   59908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0812 11:52:01.222986   59908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 11:52:01.490986   59908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 11:52:01.491013   59908 machine.go:97] duration metric: took 1.285630113s to provisionDockerMachine
	I0812 11:52:01.491026   59908 start.go:293] postStartSetup for "default-k8s-diff-port-581883" (driver="kvm2")
	I0812 11:52:01.491038   59908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 11:52:01.491054   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:01.491385   59908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 11:52:01.491414   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:01.494451   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.494830   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:01.494881   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.495025   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:01.495216   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:01.495372   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:01.495522   59908 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa Username:docker}
	I0812 11:52:01.579756   59908 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 11:52:01.583802   59908 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 11:52:01.583828   59908 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/addons for local assets ...
	I0812 11:52:01.583952   59908 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/files for local assets ...
	I0812 11:52:01.584051   59908 filesync.go:149] local asset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> 109272.pem in /etc/ssl/certs
	I0812 11:52:01.584167   59908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 11:52:01.593940   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /etc/ssl/certs/109272.pem (1708 bytes)
	I0812 11:52:01.619301   59908 start.go:296] duration metric: took 128.258855ms for postStartSetup
	I0812 11:52:01.619343   59908 fix.go:56] duration metric: took 20.045251384s for fixHost
	I0812 11:52:01.619365   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:01.622507   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.622917   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:01.622954   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.623116   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:01.623308   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:01.623461   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:01.623623   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:01.623803   59908 main.go:141] libmachine: Using SSH client type: native
	I0812 11:52:01.624015   59908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0812 11:52:01.624031   59908 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0812 11:52:01.733552   59908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723463521.708750952
	
	I0812 11:52:01.733588   59908 fix.go:216] guest clock: 1723463521.708750952
	I0812 11:52:01.733613   59908 fix.go:229] Guest: 2024-08-12 11:52:01.708750952 +0000 UTC Remote: 2024-08-12 11:52:01.619347823 +0000 UTC m=+302.640031526 (delta=89.403129ms)
	I0812 11:52:01.733639   59908 fix.go:200] guest clock delta is within tolerance: 89.403129ms
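
Illustrative sketch (not part of the captured log) of the guest-vs-host clock comparison that the fix.go lines above report. The timestamps are taken from the log; the 2-second tolerance is an assumption for illustration, not minikube's configured value.

// clockdelta.go - sketch of a guest/host clock-delta tolerance check.
package main

import (
	"fmt"
	"time"
)

func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Values from the log: guest 1723463521.708750952, host ~1723463521.619347823.
	guest := time.Unix(1723463521, 708750952)
	host := time.Unix(1723463521, 619347823)
	delta, ok := withinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // prints a ~89.4ms delta
}
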
	I0812 11:52:01.733646   59908 start.go:83] releasing machines lock for "default-k8s-diff-port-581883", held for 20.15958144s
	I0812 11:52:01.733673   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:01.733971   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetIP
	I0812 11:52:01.736957   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.737359   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:01.737388   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.737569   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:01.738113   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:01.738315   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:01.738404   59908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 11:52:01.738444   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:01.738710   59908 ssh_runner.go:195] Run: cat /version.json
	I0812 11:52:01.738746   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:01.741424   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.741655   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.741906   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:01.741935   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.742092   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:01.742120   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:01.742120   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.742293   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:01.742317   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:01.742487   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:01.742501   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:01.742693   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:01.742709   59908 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa Username:docker}
	I0812 11:52:01.742854   59908 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa Username:docker}
	I0812 11:52:01.821742   59908 ssh_runner.go:195] Run: systemctl --version
	I0812 11:52:01.854649   59908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 11:52:01.994050   59908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 11:52:02.000754   59908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 11:52:02.000848   59908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 11:52:02.017212   59908 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0812 11:52:02.017240   59908 start.go:495] detecting cgroup driver to use...
	I0812 11:52:02.017310   59908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 11:52:02.035650   59908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 11:52:02.050036   59908 docker.go:217] disabling cri-docker service (if available) ...
	I0812 11:52:02.050114   59908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 11:52:02.063916   59908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 11:52:02.078938   59908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 11:52:02.194945   59908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 11:52:02.366538   59908 docker.go:233] disabling docker service ...
	I0812 11:52:02.366616   59908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 11:52:02.380648   59908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 11:52:02.393284   59908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 11:52:02.513560   59908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 11:52:02.638028   59908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 11:52:02.662395   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 11:52:02.683732   59908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0812 11:52:02.683798   59908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:52:02.695379   59908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 11:52:02.695437   59908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:52:02.706905   59908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:52:02.718338   59908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:52:02.729708   59908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 11:52:02.740127   59908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:52:02.750198   59908 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:52:02.766470   59908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:52:02.777845   59908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 11:52:02.788254   59908 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0812 11:52:02.788322   59908 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0812 11:52:02.800552   59908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
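
A sketch (not part of the captured log) of the fallback mirrored by the lines above: if the bridge netfilter sysctl is absent, load br_netfilter, then make sure IPv4 forwarding is on. Paths are the standard /proc/sys locations; running it for real requires root.

// netfilter.go - sketch of the br_netfilter / ip_forward setup fallback.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const brNF = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(brNF); err != nil {
		fmt.Println("bridge netfilter sysctl missing, loading br_netfilter:", err)
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe failed: %v\n%s", err, out)
			return
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward` from the log.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Println("enable ip_forward:", err)
		return
	}
	fmt.Println("bridge netfilter and ip_forward configured")
}
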
	I0812 11:52:02.809932   59908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:52:02.950568   59908 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0812 11:52:03.087957   59908 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 11:52:03.088031   59908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 11:52:03.094543   59908 start.go:563] Will wait 60s for crictl version
	I0812 11:52:03.094597   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:52:03.098447   59908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 11:52:03.139477   59908 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0812 11:52:03.139561   59908 ssh_runner.go:195] Run: crio --version
	I0812 11:52:03.169931   59908 ssh_runner.go:195] Run: crio --version
	I0812 11:52:03.202808   59908 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0812 11:52:03.203979   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetIP
	I0812 11:52:03.206641   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:03.207046   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:03.207078   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:03.207300   59908 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0812 11:52:03.211169   59908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 11:52:03.222676   59908 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-581883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-581883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0812 11:52:03.222798   59908 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 11:52:03.222835   59908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 11:52:03.258003   59908 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0812 11:52:03.258074   59908 ssh_runner.go:195] Run: which lz4
	I0812 11:52:03.261945   59908 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0812 11:52:03.266002   59908 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0812 11:52:03.266035   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0812 11:52:04.616538   59908 crio.go:462] duration metric: took 1.354621946s to copy over tarball
	I0812 11:52:04.616600   59908 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0812 11:52:06.801880   59908 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.185257635s)
	I0812 11:52:06.801905   59908 crio.go:469] duration metric: took 2.18534207s to extract the tarball
	I0812 11:52:06.801912   59908 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0812 11:52:06.840167   59908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 11:52:06.887647   59908 crio.go:514] all images are preloaded for cri-o runtime.
	I0812 11:52:06.887669   59908 cache_images.go:84] Images are preloaded, skipping loading
	I0812 11:52:06.887677   59908 kubeadm.go:934] updating node { 192.168.50.114 8444 v1.30.3 crio true true} ...
	I0812 11:52:06.887780   59908 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-581883 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-581883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
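
A sketch (not part of the captured log) of how a kubelet systemd drop-in like the one printed above can be rendered with text/template. The template text is a simplified stand-in for illustration, not minikube's actual template; the values are taken from the log.

// kubeletunit.go - sketch: rendering a kubelet systemd drop-in from a template.
package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, map[string]string{
		"Runtime":     "crio",
		"KubeletPath": "/var/lib/minikube/binaries/v1.30.3/kubelet",
		"NodeName":    "default-k8s-diff-port-581883",
		"NodeIP":      "192.168.50.114",
	})
}
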
	I0812 11:52:06.887863   59908 ssh_runner.go:195] Run: crio config
	I0812 11:52:06.944347   59908 cni.go:84] Creating CNI manager for ""
	I0812 11:52:06.944372   59908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:52:06.944385   59908 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0812 11:52:06.944404   59908 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.114 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-581883 NodeName:default-k8s-diff-port-581883 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0812 11:52:06.944582   59908 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.114
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-581883"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
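
Illustrative sketch (not part of the captured log): the generated kubeadm.yaml above is a multi-document YAML stream, and a quick sanity check is to split it on document separators and report each document's apiVersion/kind. This stays stdlib-only on purpose; "kubeadm.yaml" is a hypothetical local copy of the file the log later writes to /var/tmp/minikube/kubeadm.yaml.new.

// kubeadmdocs.go - sketch: enumerate the documents in a multi-doc kubeadm config.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		var apiVersion, kind string
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "apiVersion:") {
				apiVersion = strings.TrimSpace(strings.TrimPrefix(line, "apiVersion:"))
			}
			if strings.HasPrefix(line, "kind:") {
				kind = strings.TrimSpace(strings.TrimPrefix(line, "kind:"))
			}
		}
		fmt.Printf("doc %d: %s/%s\n", i+1, apiVersion, kind)
	}
}
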
	
	I0812 11:52:06.944660   59908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0812 11:52:06.954792   59908 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 11:52:06.954853   59908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0812 11:52:06.964625   59908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0812 11:52:06.981467   59908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 11:52:06.998649   59908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0812 11:52:07.017062   59908 ssh_runner.go:195] Run: grep 192.168.50.114	control-plane.minikube.internal$ /etc/hosts
	I0812 11:52:07.020710   59908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
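
A sketch (not part of the captured log) of the same "drop any stale entry, then append the new one" hosts-file update that the bash one-liner above performs for control-plane.minikube.internal. It writes to a scratch copy ("hosts.copy", a hypothetical path) rather than the real /etc/hosts.

// hostsentry.go - sketch: ensure a single ip<TAB>host entry exists in a hosts file.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		f := strings.Fields(line)
		if len(f) >= 2 && f[len(f)-1] == host {
			continue // drop any stale entry for this host
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("hosts.copy", "192.168.50.114", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
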
	I0812 11:52:07.033442   59908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:52:07.164673   59908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 11:52:07.183526   59908 certs.go:68] Setting up /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883 for IP: 192.168.50.114
	I0812 11:52:07.183574   59908 certs.go:194] generating shared ca certs ...
	I0812 11:52:07.183598   59908 certs.go:226] acquiring lock for ca certs: {Name:mkab0b83a49d5695bc804bf960b468b7a47f73f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:52:07.183769   59908 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key
	I0812 11:52:07.183813   59908 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key
	I0812 11:52:07.183827   59908 certs.go:256] generating profile certs ...
	I0812 11:52:07.183948   59908 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/client.key
	I0812 11:52:07.184117   59908 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/apiserver.key.ebc625f3
	I0812 11:52:07.184198   59908 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/proxy-client.key
	I0812 11:52:07.184361   59908 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem (1338 bytes)
	W0812 11:52:07.184402   59908 certs.go:480] ignoring /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927_empty.pem, impossibly tiny 0 bytes
	I0812 11:52:07.184416   59908 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem (1679 bytes)
	I0812 11:52:07.184448   59908 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem (1082 bytes)
	I0812 11:52:07.184478   59908 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem (1123 bytes)
	I0812 11:52:07.184509   59908 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem (1679 bytes)
	I0812 11:52:07.184562   59908 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem (1708 bytes)
	I0812 11:52:07.185388   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 11:52:07.217465   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 11:52:07.248781   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 11:52:07.278177   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 11:52:07.313023   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0812 11:52:07.336720   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0812 11:52:07.360266   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 11:52:07.388850   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0812 11:52:07.413532   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem --> /usr/share/ca-certificates/10927.pem (1338 bytes)
	I0812 11:52:07.438304   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /usr/share/ca-certificates/109272.pem (1708 bytes)
	I0812 11:52:07.462084   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 11:52:07.486176   59908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 11:52:07.504165   59908 ssh_runner.go:195] Run: openssl version
	I0812 11:52:07.510273   59908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10927.pem && ln -fs /usr/share/ca-certificates/10927.pem /etc/ssl/certs/10927.pem"
	I0812 11:52:07.520671   59908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10927.pem
	I0812 11:52:07.525096   59908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 10:33 /usr/share/ca-certificates/10927.pem
	I0812 11:52:07.525158   59908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10927.pem
	I0812 11:52:07.531038   59908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10927.pem /etc/ssl/certs/51391683.0"
	I0812 11:52:07.542971   59908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109272.pem && ln -fs /usr/share/ca-certificates/109272.pem /etc/ssl/certs/109272.pem"
	I0812 11:52:07.554939   59908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109272.pem
	I0812 11:52:07.559868   59908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 10:33 /usr/share/ca-certificates/109272.pem
	I0812 11:52:07.559928   59908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109272.pem
	I0812 11:52:07.565655   59908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109272.pem /etc/ssl/certs/3ec20f2e.0"
	I0812 11:52:07.578139   59908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 11:52:07.589333   59908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:52:07.594679   59908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:52:07.594755   59908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:52:07.600616   59908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 11:52:07.612028   59908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 11:52:07.617247   59908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0812 11:52:07.623826   59908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0812 11:52:07.630443   59908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0812 11:52:07.637184   59908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0812 11:52:07.643723   59908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0812 11:52:07.650269   59908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
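
Illustrative sketch (not part of the captured log): a Go equivalent of the `openssl x509 -noout -checkend 86400` probes above, i.e. "will this certificate still be valid 24 hours from now?". Pass certificate paths as arguments; file names are whatever you point it at.

// checkend.go - sketch: report whether certificates expire within 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(certPEM []byte, d time.Duration) (bool, error) {
	block, _ := pem.Decode(certPEM)
	if block == nil {
		return false, fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Same question -checkend asks: is NotAfter before now+d?
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, path := range os.Args[1:] {
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, path, err)
			continue
		}
		soon, err := expiresWithin(data, 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, path, err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", path, soon)
	}
}
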
	I0812 11:52:07.657049   59908 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-581883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-581883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:52:07.657136   59908 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0812 11:52:07.657218   59908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 11:52:07.695064   59908 cri.go:89] found id: ""
	I0812 11:52:07.695136   59908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0812 11:52:07.705707   59908 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0812 11:52:07.705725   59908 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0812 11:52:07.705781   59908 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0812 11:52:07.715748   59908 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0812 11:52:07.717230   59908 kubeconfig.go:125] found "default-k8s-diff-port-581883" server: "https://192.168.50.114:8444"
	I0812 11:52:07.720217   59908 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0812 11:52:07.730557   59908 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.114
	I0812 11:52:07.730596   59908 kubeadm.go:1160] stopping kube-system containers ...
	I0812 11:52:07.730609   59908 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0812 11:52:07.730672   59908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 11:52:07.766039   59908 cri.go:89] found id: ""
	I0812 11:52:07.766114   59908 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0812 11:52:07.784359   59908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 11:52:07.794750   59908 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 11:52:07.794781   59908 kubeadm.go:157] found existing configuration files:
	
	I0812 11:52:07.794957   59908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0812 11:52:07.805063   59908 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 11:52:07.805137   59908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 11:52:07.815283   59908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0812 11:52:07.825460   59908 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 11:52:07.825535   59908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 11:52:07.836322   59908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0812 11:52:07.846381   59908 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 11:52:07.846438   59908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 11:52:07.856471   59908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0812 11:52:07.866349   59908 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 11:52:07.866415   59908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 11:52:07.876379   59908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 11:52:07.886723   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 11:52:07.993071   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 11:52:08.756027   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0812 11:52:08.978821   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 11:52:09.048377   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0812 11:52:09.146562   59908 api_server.go:52] waiting for apiserver process to appear ...
	I0812 11:52:09.146658   59908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:52:09.647073   59908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:52:10.147700   59908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:52:10.647212   59908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:52:11.147702   59908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:52:11.174640   59908 api_server.go:72] duration metric: took 2.028079757s to wait for apiserver process to appear ...
	I0812 11:52:11.174665   59908 api_server.go:88] waiting for apiserver healthz status ...
	I0812 11:52:11.174698   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:11.175152   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": dial tcp 192.168.50.114:8444: connect: connection refused
	I0812 11:52:11.674838   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:16.675764   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 11:52:16.675832   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:21.676084   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 11:52:21.676129   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:26.676483   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 11:52:26.676531   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:31.676994   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 11:52:31.677032   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:31.841007   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": read tcp 192.168.50.1:45150->192.168.50.114:8444: read: connection reset by peer
	I0812 11:52:32.175501   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:32.176109   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": dial tcp 192.168.50.114:8444: connect: connection refused
	I0812 11:52:32.675714   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:37.676528   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 11:52:37.676575   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:42.677744   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 11:52:42.677782   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:47.679062   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 11:52:47.679139   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:50.075690   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0812 11:52:50.075722   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0812 11:52:50.075736   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:50.231100   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0812 11:52:50.231129   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0812 11:52:50.231143   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:50.273525   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:50.273564   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:50.675005   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:50.681580   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:50.681621   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:51.175129   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:51.188048   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:51.188075   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:51.675218   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:51.684784   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:51.684822   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:52.175465   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:52.179666   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:52.179686   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:52.675234   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:52.680948   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:52.680972   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:53.175533   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:53.180849   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:53.180889   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:53.675084   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:53.680320   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:53.680352   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:54.175057   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:54.180061   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:54.180087   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:54.675117   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:54.679922   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:54.679950   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:55.175569   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:55.179883   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:55.179908   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:55.675522   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:55.680182   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 200:
	ok
	I0812 11:52:55.686477   59908 api_server.go:141] control plane version: v1.30.3
	I0812 11:52:55.686505   59908 api_server.go:131] duration metric: took 44.511833813s to wait for apiserver health ...
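Note on the 403 → 500 → 200 progression above: anonymous requests to /healthz are rejected (403) until the RBAC bootstrap roles exist, the verbose 500 bodies list each poststart hook that has not yet finished, and the probe only succeeds once every hook reports ok. Below is a minimal, self-contained sketch of that kind of polling loop; the address and the roughly 500 ms cadence are taken from the log, but the code is illustrative only and is not minikube's api_server.go.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// The apiserver presents a cluster-internal certificate, so verification
	// is skipped for this illustrative probe only.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	for {
		resp, err := client.Get("https://192.168.50.114:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // every poststart hook reports ok
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
}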
	I0812 11:52:55.686513   59908 cni.go:84] Creating CNI manager for ""
	I0812 11:52:55.686519   59908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:52:55.688415   59908 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0812 11:52:55.689745   59908 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0812 11:52:55.700910   59908 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
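The 496-byte file pushed to /etc/cni/net.d/1-k8s.conflist above is minikube's bridge CNI configuration; its exact contents are not shown in the log. The sketch below writes a representative bridge-plus-portmap conflist of the kind the CNI bridge plugin accepts, purely as an illustration (the config name, subnet and plugin options are assumptions, not what minikube generated here).

package main

import "os"

// Representative bridge CNI config; the real 1-k8s.conflist minikube wrote
// may differ in name, subnet and plugin options.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}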
	I0812 11:52:55.719588   59908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 11:52:55.729581   59908 system_pods.go:59] 8 kube-system pods found
	I0812 11:52:55.729622   59908 system_pods.go:61] "coredns-7db6d8ff4d-86flr" [703201f6-ba92-45f7-b273-ee508cf51e2b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0812 11:52:55.729630   59908 system_pods.go:61] "etcd-default-k8s-diff-port-581883" [98074b68-6274-4496-8fd3-7bad8b59b063] Running
	I0812 11:52:55.729640   59908 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-581883" [3f9d02cd-8b6f-4640-98e2-ebc5145444ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0812 11:52:55.729651   59908 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-581883" [b6c17f8f-18eb-41e6-9ef6-bab882066d51] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0812 11:52:55.729662   59908 system_pods.go:61] "kube-proxy-h6fzz" [b0f6bcc8-263a-4b23-a60b-c67475a868bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0812 11:52:55.729673   59908 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-581883" [3b8e21a4-9578-40fc-be22-8a469b5e9ff2] Running
	I0812 11:52:55.729682   59908 system_pods.go:61] "metrics-server-569cc877fc-wcpgl" [11f6c813-ebc1-4712-b758-cb08ff921d77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 11:52:55.729693   59908 system_pods.go:61] "storage-provisioner" [93affc3b-a4e7-4c19-824c-3eec33616acc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0812 11:52:55.729702   59908 system_pods.go:74] duration metric: took 10.095218ms to wait for pod list to return data ...
	I0812 11:52:55.729712   59908 node_conditions.go:102] verifying NodePressure condition ...
	I0812 11:52:55.733812   59908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 11:52:55.733841   59908 node_conditions.go:123] node cpu capacity is 2
	I0812 11:52:55.733857   59908 node_conditions.go:105] duration metric: took 4.136436ms to run NodePressure ...
	I0812 11:52:55.733877   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 11:52:56.014193   59908 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0812 11:52:56.026600   59908 kubeadm.go:739] kubelet initialised
	I0812 11:52:56.026629   59908 kubeadm.go:740] duration metric: took 12.405458ms waiting for restarted kubelet to initialise ...
	I0812 11:52:56.026637   59908 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:52:56.031669   59908 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-86flr" in "kube-system" namespace to be "Ready" ...
	I0812 11:52:56.042499   59908 pod_ready.go:97] node "default-k8s-diff-port-581883" hosting pod "coredns-7db6d8ff4d-86flr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.042526   59908 pod_ready.go:81] duration metric: took 10.82967ms for pod "coredns-7db6d8ff4d-86flr" in "kube-system" namespace to be "Ready" ...
	E0812 11:52:56.042537   59908 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-581883" hosting pod "coredns-7db6d8ff4d-86flr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.042547   59908 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:52:56.048265   59908 pod_ready.go:97] node "default-k8s-diff-port-581883" hosting pod "etcd-default-k8s-diff-port-581883" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.048290   59908 pod_ready.go:81] duration metric: took 5.732651ms for pod "etcd-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	E0812 11:52:56.048307   59908 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-581883" hosting pod "etcd-default-k8s-diff-port-581883" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.048315   59908 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:52:56.054613   59908 pod_ready.go:97] node "default-k8s-diff-port-581883" hosting pod "kube-apiserver-default-k8s-diff-port-581883" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.054639   59908 pod_ready.go:81] duration metric: took 6.314697ms for pod "kube-apiserver-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	E0812 11:52:56.054652   59908 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-581883" hosting pod "kube-apiserver-default-k8s-diff-port-581883" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.054660   59908 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:52:56.125380   59908 pod_ready.go:97] node "default-k8s-diff-port-581883" hosting pod "kube-controller-manager-default-k8s-diff-port-581883" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.125418   59908 pod_ready.go:81] duration metric: took 70.74807ms for pod "kube-controller-manager-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	E0812 11:52:56.125433   59908 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-581883" hosting pod "kube-controller-manager-default-k8s-diff-port-581883" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.125441   59908 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-h6fzz" in "kube-system" namespace to be "Ready" ...
	I0812 11:52:56.523216   59908 pod_ready.go:97] node "default-k8s-diff-port-581883" hosting pod "kube-proxy-h6fzz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.523251   59908 pod_ready.go:81] duration metric: took 397.801141ms for pod "kube-proxy-h6fzz" in "kube-system" namespace to be "Ready" ...
	E0812 11:52:56.523263   59908 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-581883" hosting pod "kube-proxy-h6fzz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.523272   59908 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:52:56.923229   59908 pod_ready.go:97] node "default-k8s-diff-port-581883" hosting pod "kube-scheduler-default-k8s-diff-port-581883" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.923269   59908 pod_ready.go:81] duration metric: took 399.981518ms for pod "kube-scheduler-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	E0812 11:52:56.923285   59908 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-581883" hosting pod "kube-scheduler-default-k8s-diff-port-581883" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.923295   59908 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace to be "Ready" ...
	I0812 11:52:57.323846   59908 pod_ready.go:97] node "default-k8s-diff-port-581883" hosting pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:57.323877   59908 pod_ready.go:81] duration metric: took 400.572011ms for pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace to be "Ready" ...
	E0812 11:52:57.323888   59908 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-581883" hosting pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:57.323896   59908 pod_ready.go:38] duration metric: took 1.297248784s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
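The pod_ready loop above skips (and logs an error for) every pod whose node still reports Ready=False, which is why each individual wait finishes within milliseconds instead of the 4m0s budget. A small client-go sketch of the same kind of check follows; the kubeconfig path is taken from the log, everything else is illustrative rather than minikube's pod_ready.go.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19409-3774/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// If the node itself is not Ready, waiting on its pods is pointless.
	nodes, _ := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("node %s Ready=%s\n", n.Name, c.Status)
			}
		}
	}

	// Report the Ready condition of each kube-system pod.
	pods, _ := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("pod %s Ready=%v\n", p.Name, ready)
	}
}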
	I0812 11:52:57.323911   59908 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 11:52:57.336325   59908 ops.go:34] apiserver oom_adj: -16
	I0812 11:52:57.336345   59908 kubeadm.go:597] duration metric: took 49.630615077s to restartPrimaryControlPlane
	I0812 11:52:57.336365   59908 kubeadm.go:394] duration metric: took 49.67932273s to StartCluster
	I0812 11:52:57.336380   59908 settings.go:142] acquiring lock: {Name:mk4060151bda1f131d6abfbbd19b6a8ab3b5e774 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:52:57.336447   59908 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 11:52:57.338064   59908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/kubeconfig: {Name:mke68f1d372d8e5baa69199b426efaec54860499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:52:57.338331   59908 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.114 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 11:52:57.338433   59908 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0812 11:52:57.338521   59908 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-581883"
	I0812 11:52:57.338536   59908 config.go:182] Loaded profile config "default-k8s-diff-port-581883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 11:52:57.338551   59908 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-581883"
	I0812 11:52:57.338587   59908 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-581883"
	I0812 11:52:57.338558   59908 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-581883"
	W0812 11:52:57.338662   59908 addons.go:243] addon storage-provisioner should already be in state true
	I0812 11:52:57.338695   59908 host.go:66] Checking if "default-k8s-diff-port-581883" exists ...
	I0812 11:52:57.338563   59908 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-581883"
	I0812 11:52:57.338755   59908 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-581883"
	W0812 11:52:57.338764   59908 addons.go:243] addon metrics-server should already be in state true
	I0812 11:52:57.338788   59908 host.go:66] Checking if "default-k8s-diff-port-581883" exists ...
	I0812 11:52:57.339032   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:52:57.339033   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:52:57.339035   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:52:57.339067   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:52:57.339084   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:52:57.339065   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:52:57.340300   59908 out.go:177] * Verifying Kubernetes components...
	I0812 11:52:57.342119   59908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:52:57.356069   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43019
	I0812 11:52:57.356172   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35497
	I0812 11:52:57.356610   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:52:57.356723   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:52:57.357168   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:52:57.357189   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:52:57.357329   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:52:57.357356   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:52:57.357543   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:52:57.357718   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:52:57.358105   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:52:57.358143   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:52:57.358331   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:52:57.358367   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:52:57.360134   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41803
	I0812 11:52:57.360536   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:52:57.361016   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:52:57.361041   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:52:57.361371   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:52:57.361569   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetState
	I0812 11:52:57.365260   59908 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-581883"
	W0812 11:52:57.365279   59908 addons.go:243] addon default-storageclass should already be in state true
	I0812 11:52:57.365312   59908 host.go:66] Checking if "default-k8s-diff-port-581883" exists ...
	I0812 11:52:57.365596   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:52:57.365639   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:52:57.377488   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34035
	I0812 11:52:57.378076   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:52:57.378581   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41469
	I0812 11:52:57.378657   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:52:57.378680   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:52:57.378965   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:52:57.379025   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:52:57.379251   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetState
	I0812 11:52:57.379656   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:52:57.379683   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:52:57.380105   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:52:57.380391   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetState
	I0812 11:52:57.382273   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:57.382496   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:57.383601   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33225
	I0812 11:52:57.384062   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:52:57.384739   59908 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 11:52:57.384750   59908 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0812 11:52:57.384914   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:52:57.384940   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:52:57.385293   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:52:57.385956   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:52:57.386002   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:52:57.386314   59908 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0812 11:52:57.386336   59908 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0812 11:52:57.386355   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:57.386386   59908 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 11:52:57.386398   59908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 11:52:57.386416   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:57.390135   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:57.390335   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:57.390669   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:57.390729   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:57.391183   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:57.391187   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:57.391251   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:57.391393   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:57.391432   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:57.391571   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:57.391592   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:57.391722   59908 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa Username:docker}
	I0812 11:52:57.391758   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:57.391921   59908 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa Username:docker}
	I0812 11:52:57.431097   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39357
	I0812 11:52:57.431600   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:52:57.432116   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:52:57.432140   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:52:57.432506   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:52:57.432702   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetState
	I0812 11:52:57.434513   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:57.434753   59908 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 11:52:57.434772   59908 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 11:52:57.434791   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:57.438433   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:57.438917   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:57.438951   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:57.439150   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:57.439384   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:57.439574   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:57.439744   59908 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa Username:docker}
	I0812 11:52:57.547325   59908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 11:52:57.566163   59908 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-581883" to be "Ready" ...
	I0812 11:52:57.633469   59908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 11:52:57.641330   59908 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0812 11:52:57.641355   59908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0812 11:52:57.662909   59908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 11:52:57.691294   59908 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0812 11:52:57.691321   59908 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0812 11:52:57.746668   59908 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 11:52:57.746693   59908 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0812 11:52:57.787970   59908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 11:52:58.628106   59908 main.go:141] libmachine: Making call to close driver server
	I0812 11:52:58.628134   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Close
	I0812 11:52:58.628106   59908 main.go:141] libmachine: Making call to close driver server
	I0812 11:52:58.628195   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Close
	I0812 11:52:58.628464   59908 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:52:58.628481   59908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:52:58.628490   59908 main.go:141] libmachine: Making call to close driver server
	I0812 11:52:58.628498   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Close
	I0812 11:52:58.628611   59908 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:52:58.628626   59908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:52:58.628647   59908 main.go:141] libmachine: Making call to close driver server
	I0812 11:52:58.628651   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | Closing plugin on server side
	I0812 11:52:58.628655   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Close
	I0812 11:52:58.628775   59908 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:52:58.628785   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | Closing plugin on server side
	I0812 11:52:58.628791   59908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:52:58.630407   59908 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:52:58.630424   59908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:52:58.634739   59908 main.go:141] libmachine: Making call to close driver server
	I0812 11:52:58.634759   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Close
	I0812 11:52:58.635034   59908 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:52:58.635053   59908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:52:58.643171   59908 main.go:141] libmachine: Making call to close driver server
	I0812 11:52:58.643191   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Close
	I0812 11:52:58.643484   59908 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:52:58.643502   59908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:52:58.643511   59908 main.go:141] libmachine: Making call to close driver server
	I0812 11:52:58.643520   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Close
	I0812 11:52:58.643532   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | Closing plugin on server side
	I0812 11:52:58.643732   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | Closing plugin on server side
	I0812 11:52:58.643754   59908 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:52:58.643762   59908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:52:58.643771   59908 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-581883"
	I0812 11:52:58.645811   59908 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0812 11:52:58.647443   59908 addons.go:510] duration metric: took 1.309010451s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0812 11:52:59.569732   59908 node_ready.go:53] node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:53:01.570136   59908 node_ready.go:53] node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:53:04.069965   59908 node_ready.go:53] node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:53:05.570009   59908 node_ready.go:49] node "default-k8s-diff-port-581883" has status "Ready":"True"
	I0812 11:53:05.570039   59908 node_ready.go:38] duration metric: took 8.003840242s for node "default-k8s-diff-port-581883" to be "Ready" ...
	I0812 11:53:05.570050   59908 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:53:05.577206   59908 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-86flr" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:07.584071   59908 pod_ready.go:102] pod "coredns-7db6d8ff4d-86flr" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:08.583523   59908 pod_ready.go:92] pod "coredns-7db6d8ff4d-86flr" in "kube-system" namespace has status "Ready":"True"
	I0812 11:53:08.583550   59908 pod_ready.go:81] duration metric: took 3.006317399s for pod "coredns-7db6d8ff4d-86flr" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.583559   59908 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.589137   59908 pod_ready.go:92] pod "etcd-default-k8s-diff-port-581883" in "kube-system" namespace has status "Ready":"True"
	I0812 11:53:08.589163   59908 pod_ready.go:81] duration metric: took 5.595854ms for pod "etcd-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.589175   59908 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.593746   59908 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-581883" in "kube-system" namespace has status "Ready":"True"
	I0812 11:53:08.593767   59908 pod_ready.go:81] duration metric: took 4.585829ms for pod "kube-apiserver-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.593776   59908 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.598058   59908 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-581883" in "kube-system" namespace has status "Ready":"True"
	I0812 11:53:08.598078   59908 pod_ready.go:81] duration metric: took 4.296254ms for pod "kube-controller-manager-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.598087   59908 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h6fzz" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.603106   59908 pod_ready.go:92] pod "kube-proxy-h6fzz" in "kube-system" namespace has status "Ready":"True"
	I0812 11:53:08.603127   59908 pod_ready.go:81] duration metric: took 5.033938ms for pod "kube-proxy-h6fzz" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.603136   59908 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.981404   59908 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-581883" in "kube-system" namespace has status "Ready":"True"
	I0812 11:53:08.981429   59908 pod_ready.go:81] duration metric: took 378.286388ms for pod "kube-scheduler-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.981439   59908 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:10.988175   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:13.488230   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:15.987639   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:18.487540   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:20.490803   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:22.987167   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:25.488840   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:27.988661   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:30.487605   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:32.487748   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:34.488109   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:36.987016   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:38.987165   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:40.989187   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:43.487407   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:45.487714   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:47.487961   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:49.988540   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:52.487216   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:54.487433   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:56.487958   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:58.489095   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:00.987353   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:02.989138   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:05.488174   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:07.988702   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:10.488396   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:12.988099   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:14.988220   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:16.988395   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:19.491228   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:21.987397   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:23.987898   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:26.487993   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:28.489384   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:30.989371   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:33.488670   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:35.987526   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:37.988823   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:40.488488   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:42.488612   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:44.989023   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:46.990079   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:49.488206   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:51.488446   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:53.988007   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:56.488200   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:58.490348   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:00.988756   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:03.487527   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:05.987624   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:07.989990   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:10.487888   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:12.488656   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:14.489648   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:16.988551   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:19.488408   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:21.988902   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:24.487895   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:26.988377   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:29.488082   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:31.986995   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:33.987359   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:35.989125   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:38.489945   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:40.493189   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:42.988399   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:45.487307   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:47.487758   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:49.487798   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:51.987795   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:53.988376   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:55.990060   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:58.487684   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:00.487893   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:02.988185   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:04.988436   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:07.487867   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:09.987976   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:11.988078   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:13.988354   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:15.988676   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:18.488658   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:20.987780   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:23.486965   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:25.487065   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:27.487891   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:29.488825   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:31.988732   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:34.487771   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:36.988555   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:39.489154   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:41.987687   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:43.990010   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:45.991210   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:48.487381   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:50.987943   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:53.487657   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:55.987206   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:57.988164   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:59.990098   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:57:02.486732   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:57:04.488492   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:57:06.987443   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:57:08.988727   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:57:08.988756   59908 pod_ready.go:81] duration metric: took 4m0.007310185s for pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace to be "Ready" ...
	E0812 11:57:08.988768   59908 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0812 11:57:08.988777   59908 pod_ready.go:38] duration metric: took 4m3.418715457s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:57:08.988795   59908 api_server.go:52] waiting for apiserver process to appear ...
	I0812 11:57:08.988823   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:57:08.988909   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:57:09.035203   59908 cri.go:89] found id: "87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98"
	I0812 11:57:09.035230   59908 cri.go:89] found id: "399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1"
	I0812 11:57:09.035236   59908 cri.go:89] found id: ""
	I0812 11:57:09.035244   59908 logs.go:276] 2 containers: [87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98 399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1]
	I0812 11:57:09.035298   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.039940   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.044354   59908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:57:09.044430   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:57:09.079692   59908 cri.go:89] found id: "a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126"
	I0812 11:57:09.079716   59908 cri.go:89] found id: ""
	I0812 11:57:09.079725   59908 logs.go:276] 1 containers: [a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126]
	I0812 11:57:09.079788   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.084499   59908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:57:09.084576   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:57:09.124721   59908 cri.go:89] found id: "72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4"
	I0812 11:57:09.124750   59908 cri.go:89] found id: ""
	I0812 11:57:09.124761   59908 logs.go:276] 1 containers: [72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4]
	I0812 11:57:09.124828   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.128921   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:57:09.128997   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:57:09.164960   59908 cri.go:89] found id: "3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804"
	I0812 11:57:09.164982   59908 cri.go:89] found id: ""
	I0812 11:57:09.164995   59908 logs.go:276] 1 containers: [3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804]
	I0812 11:57:09.165046   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.169043   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:57:09.169116   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:57:09.211298   59908 cri.go:89] found id: "b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26"
	I0812 11:57:09.211322   59908 cri.go:89] found id: ""
	I0812 11:57:09.211329   59908 logs.go:276] 1 containers: [b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26]
	I0812 11:57:09.211377   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.215348   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:57:09.215440   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:57:09.269500   59908 cri.go:89] found id: "b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f"
	I0812 11:57:09.269519   59908 cri.go:89] found id: "f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f"
	I0812 11:57:09.269523   59908 cri.go:89] found id: ""
	I0812 11:57:09.269530   59908 logs.go:276] 2 containers: [b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f]
	I0812 11:57:09.269575   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.273724   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.277660   59908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:57:09.277732   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:57:09.327668   59908 cri.go:89] found id: ""
	I0812 11:57:09.327691   59908 logs.go:276] 0 containers: []
	W0812 11:57:09.327698   59908 logs.go:278] No container was found matching "kindnet"
	I0812 11:57:09.327703   59908 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0812 11:57:09.327765   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0812 11:57:09.363936   59908 cri.go:89] found id: "3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c"
	I0812 11:57:09.363957   59908 cri.go:89] found id: ""
	I0812 11:57:09.363964   59908 logs.go:276] 1 containers: [3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c]
	I0812 11:57:09.364010   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.368123   59908 logs.go:123] Gathering logs for kubelet ...
	I0812 11:57:09.368151   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:57:09.441676   59908 logs.go:123] Gathering logs for kube-apiserver [399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1] ...
	I0812 11:57:09.441725   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1"
	I0812 11:57:09.483275   59908 logs.go:123] Gathering logs for kube-controller-manager [b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f] ...
	I0812 11:57:09.483317   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f"
	I0812 11:57:09.544504   59908 logs.go:123] Gathering logs for kube-apiserver [87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98] ...
	I0812 11:57:09.544539   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98"
	I0812 11:57:09.594808   59908 logs.go:123] Gathering logs for kube-scheduler [3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804] ...
	I0812 11:57:09.594839   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804"
	I0812 11:57:09.636141   59908 logs.go:123] Gathering logs for kube-proxy [b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26] ...
	I0812 11:57:09.636178   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26"
	I0812 11:57:09.673996   59908 logs.go:123] Gathering logs for kube-controller-manager [f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f] ...
	I0812 11:57:09.674023   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f"
	I0812 11:57:09.711480   59908 logs.go:123] Gathering logs for storage-provisioner [3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c] ...
	I0812 11:57:09.711504   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c"
	I0812 11:57:09.747830   59908 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:57:09.747861   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:57:10.268559   59908 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:57:10.268607   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 11:57:10.394461   59908 logs.go:123] Gathering logs for etcd [a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126] ...
	I0812 11:57:10.394495   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126"
	I0812 11:57:10.439760   59908 logs.go:123] Gathering logs for coredns [72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4] ...
	I0812 11:57:10.439796   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4"
	I0812 11:57:10.474457   59908 logs.go:123] Gathering logs for container status ...
	I0812 11:57:10.474496   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:57:10.515430   59908 logs.go:123] Gathering logs for dmesg ...
	I0812 11:57:10.515464   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:57:13.029229   59908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:57:13.045764   59908 api_server.go:72] duration metric: took 4m15.707395821s to wait for apiserver process to appear ...
	I0812 11:57:13.045795   59908 api_server.go:88] waiting for apiserver healthz status ...
	I0812 11:57:13.045832   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:57:13.045878   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:57:13.082792   59908 cri.go:89] found id: "87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98"
	I0812 11:57:13.082818   59908 cri.go:89] found id: "399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1"
	I0812 11:57:13.082824   59908 cri.go:89] found id: ""
	I0812 11:57:13.082833   59908 logs.go:276] 2 containers: [87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98 399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1]
	I0812 11:57:13.082893   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.087987   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.092188   59908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:57:13.092251   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:57:13.135193   59908 cri.go:89] found id: "a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126"
	I0812 11:57:13.135226   59908 cri.go:89] found id: ""
	I0812 11:57:13.135237   59908 logs.go:276] 1 containers: [a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126]
	I0812 11:57:13.135293   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.140269   59908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:57:13.140344   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:57:13.193436   59908 cri.go:89] found id: "72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4"
	I0812 11:57:13.193458   59908 cri.go:89] found id: ""
	I0812 11:57:13.193465   59908 logs.go:276] 1 containers: [72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4]
	I0812 11:57:13.193539   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.198507   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:57:13.198589   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:57:13.241696   59908 cri.go:89] found id: "3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804"
	I0812 11:57:13.241718   59908 cri.go:89] found id: ""
	I0812 11:57:13.241725   59908 logs.go:276] 1 containers: [3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804]
	I0812 11:57:13.241773   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.246865   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:57:13.246937   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:57:13.293284   59908 cri.go:89] found id: "b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26"
	I0812 11:57:13.293308   59908 cri.go:89] found id: ""
	I0812 11:57:13.293315   59908 logs.go:276] 1 containers: [b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26]
	I0812 11:57:13.293380   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.297698   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:57:13.297772   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:57:13.342737   59908 cri.go:89] found id: "b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f"
	I0812 11:57:13.342757   59908 cri.go:89] found id: "f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f"
	I0812 11:57:13.342760   59908 cri.go:89] found id: ""
	I0812 11:57:13.342767   59908 logs.go:276] 2 containers: [b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f]
	I0812 11:57:13.342809   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.347634   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.351733   59908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:57:13.351794   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:57:13.394540   59908 cri.go:89] found id: ""
	I0812 11:57:13.394570   59908 logs.go:276] 0 containers: []
	W0812 11:57:13.394580   59908 logs.go:278] No container was found matching "kindnet"
	I0812 11:57:13.394594   59908 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0812 11:57:13.394647   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0812 11:57:13.433910   59908 cri.go:89] found id: "3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c"
	I0812 11:57:13.433934   59908 cri.go:89] found id: ""
	I0812 11:57:13.433944   59908 logs.go:276] 1 containers: [3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c]
	I0812 11:57:13.434001   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.437999   59908 logs.go:123] Gathering logs for dmesg ...
	I0812 11:57:13.438024   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:57:13.451945   59908 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:57:13.451973   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 11:57:13.561957   59908 logs.go:123] Gathering logs for coredns [72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4] ...
	I0812 11:57:13.561990   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4"
	I0812 11:57:13.602729   59908 logs.go:123] Gathering logs for kubelet ...
	I0812 11:57:13.602754   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:57:13.673729   59908 logs.go:123] Gathering logs for kube-apiserver [399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1] ...
	I0812 11:57:13.673766   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1"
	I0812 11:57:13.714814   59908 logs.go:123] Gathering logs for kube-proxy [b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26] ...
	I0812 11:57:13.714843   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26"
	I0812 11:57:13.755876   59908 logs.go:123] Gathering logs for kube-controller-manager [b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f] ...
	I0812 11:57:13.755902   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f"
	I0812 11:57:13.814263   59908 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:57:13.814301   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:57:14.305206   59908 logs.go:123] Gathering logs for container status ...
	I0812 11:57:14.305243   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:57:14.349455   59908 logs.go:123] Gathering logs for kube-apiserver [87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98] ...
	I0812 11:57:14.349486   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98"
	I0812 11:57:14.399731   59908 logs.go:123] Gathering logs for etcd [a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126] ...
	I0812 11:57:14.399765   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126"
	I0812 11:57:14.443494   59908 logs.go:123] Gathering logs for kube-scheduler [3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804] ...
	I0812 11:57:14.443524   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804"
	I0812 11:57:14.486034   59908 logs.go:123] Gathering logs for kube-controller-manager [f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f] ...
	I0812 11:57:14.486070   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f"
	I0812 11:57:14.524991   59908 logs.go:123] Gathering logs for storage-provisioner [3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c] ...
	I0812 11:57:14.525018   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c"
	I0812 11:57:17.062314   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:57:17.068363   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 200:
	ok
	I0812 11:57:17.069818   59908 api_server.go:141] control plane version: v1.30.3
	I0812 11:57:17.069845   59908 api_server.go:131] duration metric: took 4.024042567s to wait for apiserver health ...
	I0812 11:57:17.069856   59908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 11:57:17.069882   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:57:17.069937   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:57:17.107213   59908 cri.go:89] found id: "87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98"
	I0812 11:57:17.107233   59908 cri.go:89] found id: "399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1"
	I0812 11:57:17.107237   59908 cri.go:89] found id: ""
	I0812 11:57:17.107244   59908 logs.go:276] 2 containers: [87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98 399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1]
	I0812 11:57:17.107297   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.117678   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.121897   59908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:57:17.121962   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:57:17.159450   59908 cri.go:89] found id: "a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126"
	I0812 11:57:17.159480   59908 cri.go:89] found id: ""
	I0812 11:57:17.159489   59908 logs.go:276] 1 containers: [a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126]
	I0812 11:57:17.159548   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.164078   59908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:57:17.164156   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:57:17.207977   59908 cri.go:89] found id: "72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4"
	I0812 11:57:17.208002   59908 cri.go:89] found id: ""
	I0812 11:57:17.208010   59908 logs.go:276] 1 containers: [72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4]
	I0812 11:57:17.208063   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.212055   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:57:17.212136   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:57:17.259289   59908 cri.go:89] found id: "3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804"
	I0812 11:57:17.259316   59908 cri.go:89] found id: ""
	I0812 11:57:17.259327   59908 logs.go:276] 1 containers: [3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804]
	I0812 11:57:17.259393   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.263818   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:57:17.263896   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:57:17.301371   59908 cri.go:89] found id: "b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26"
	I0812 11:57:17.301404   59908 cri.go:89] found id: ""
	I0812 11:57:17.301413   59908 logs.go:276] 1 containers: [b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26]
	I0812 11:57:17.301473   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.306038   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:57:17.306100   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:57:17.343982   59908 cri.go:89] found id: "b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f"
	I0812 11:57:17.344006   59908 cri.go:89] found id: "f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f"
	I0812 11:57:17.344017   59908 cri.go:89] found id: ""
	I0812 11:57:17.344027   59908 logs.go:276] 2 containers: [b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f]
	I0812 11:57:17.344086   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.348135   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.352720   59908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:57:17.352790   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:57:17.392647   59908 cri.go:89] found id: ""
	I0812 11:57:17.392673   59908 logs.go:276] 0 containers: []
	W0812 11:57:17.392682   59908 logs.go:278] No container was found matching "kindnet"
	I0812 11:57:17.392687   59908 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0812 11:57:17.392740   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0812 11:57:17.429067   59908 cri.go:89] found id: "3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c"
	I0812 11:57:17.429088   59908 cri.go:89] found id: ""
	I0812 11:57:17.429095   59908 logs.go:276] 1 containers: [3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c]
	I0812 11:57:17.429140   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.433406   59908 logs.go:123] Gathering logs for etcd [a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126] ...
	I0812 11:57:17.433433   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126"
	I0812 11:57:17.479091   59908 logs.go:123] Gathering logs for container status ...
	I0812 11:57:17.479123   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:57:17.519579   59908 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:57:17.519614   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 11:57:17.620109   59908 logs.go:123] Gathering logs for kube-apiserver [399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1] ...
	I0812 11:57:17.620143   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1"
	I0812 11:57:17.659604   59908 logs.go:123] Gathering logs for kube-controller-manager [b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f] ...
	I0812 11:57:17.659639   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f"
	I0812 11:57:17.712850   59908 logs.go:123] Gathering logs for kube-controller-manager [f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f] ...
	I0812 11:57:17.712901   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f"
	I0812 11:57:17.750567   59908 logs.go:123] Gathering logs for kubelet ...
	I0812 11:57:17.750595   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:57:17.822429   59908 logs.go:123] Gathering logs for coredns [72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4] ...
	I0812 11:57:17.822459   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4"
	I0812 11:57:17.864303   59908 logs.go:123] Gathering logs for kube-scheduler [3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804] ...
	I0812 11:57:17.864338   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804"
	I0812 11:57:17.904307   59908 logs.go:123] Gathering logs for kube-proxy [b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26] ...
	I0812 11:57:17.904340   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26"
	I0812 11:57:17.939073   59908 logs.go:123] Gathering logs for storage-provisioner [3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c] ...
	I0812 11:57:17.939103   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c"
	I0812 11:57:17.982222   59908 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:57:17.982253   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:57:18.369007   59908 logs.go:123] Gathering logs for dmesg ...
	I0812 11:57:18.369053   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:57:18.385187   59908 logs.go:123] Gathering logs for kube-apiserver [87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98] ...
	I0812 11:57:18.385219   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98"
	I0812 11:57:20.949075   59908 system_pods.go:59] 8 kube-system pods found
	I0812 11:57:20.949110   59908 system_pods.go:61] "coredns-7db6d8ff4d-86flr" [703201f6-ba92-45f7-b273-ee508cf51e2b] Running
	I0812 11:57:20.949115   59908 system_pods.go:61] "etcd-default-k8s-diff-port-581883" [98074b68-6274-4496-8fd3-7bad8b59b063] Running
	I0812 11:57:20.949119   59908 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-581883" [3f9d02cd-8b6f-4640-98e2-ebc5145444ea] Running
	I0812 11:57:20.949122   59908 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-581883" [b6c17f8f-18eb-41e6-9ef6-bab882066d51] Running
	I0812 11:57:20.949125   59908 system_pods.go:61] "kube-proxy-h6fzz" [b0f6bcc8-263a-4b23-a60b-c67475a868bf] Running
	I0812 11:57:20.949128   59908 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-581883" [3b8e21a4-9578-40fc-be22-8a469b5e9ff2] Running
	I0812 11:57:20.949133   59908 system_pods.go:61] "metrics-server-569cc877fc-wcpgl" [11f6c813-ebc1-4712-b758-cb08ff921d77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 11:57:20.949139   59908 system_pods.go:61] "storage-provisioner" [93affc3b-a4e7-4c19-824c-3eec33616acc] Running
	I0812 11:57:20.949146   59908 system_pods.go:74] duration metric: took 3.879283024s to wait for pod list to return data ...
	I0812 11:57:20.949153   59908 default_sa.go:34] waiting for default service account to be created ...
	I0812 11:57:20.951355   59908 default_sa.go:45] found service account: "default"
	I0812 11:57:20.951376   59908 default_sa.go:55] duration metric: took 2.217928ms for default service account to be created ...
	I0812 11:57:20.951383   59908 system_pods.go:116] waiting for k8s-apps to be running ...
	I0812 11:57:20.956479   59908 system_pods.go:86] 8 kube-system pods found
	I0812 11:57:20.956505   59908 system_pods.go:89] "coredns-7db6d8ff4d-86flr" [703201f6-ba92-45f7-b273-ee508cf51e2b] Running
	I0812 11:57:20.956513   59908 system_pods.go:89] "etcd-default-k8s-diff-port-581883" [98074b68-6274-4496-8fd3-7bad8b59b063] Running
	I0812 11:57:20.956519   59908 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-581883" [3f9d02cd-8b6f-4640-98e2-ebc5145444ea] Running
	I0812 11:57:20.956527   59908 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-581883" [b6c17f8f-18eb-41e6-9ef6-bab882066d51] Running
	I0812 11:57:20.956532   59908 system_pods.go:89] "kube-proxy-h6fzz" [b0f6bcc8-263a-4b23-a60b-c67475a868bf] Running
	I0812 11:57:20.956537   59908 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-581883" [3b8e21a4-9578-40fc-be22-8a469b5e9ff2] Running
	I0812 11:57:20.956546   59908 system_pods.go:89] "metrics-server-569cc877fc-wcpgl" [11f6c813-ebc1-4712-b758-cb08ff921d77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 11:57:20.956553   59908 system_pods.go:89] "storage-provisioner" [93affc3b-a4e7-4c19-824c-3eec33616acc] Running
	I0812 11:57:20.956564   59908 system_pods.go:126] duration metric: took 5.175002ms to wait for k8s-apps to be running ...
	I0812 11:57:20.956572   59908 system_svc.go:44] waiting for kubelet service to be running ....
	I0812 11:57:20.956624   59908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:57:20.971826   59908 system_svc.go:56] duration metric: took 15.246626ms WaitForService to wait for kubelet
	I0812 11:57:20.971856   59908 kubeadm.go:582] duration metric: took 4m23.633490244s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 11:57:20.971881   59908 node_conditions.go:102] verifying NodePressure condition ...
	I0812 11:57:20.974643   59908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 11:57:20.974661   59908 node_conditions.go:123] node cpu capacity is 2
	I0812 11:57:20.974671   59908 node_conditions.go:105] duration metric: took 2.785ms to run NodePressure ...
	I0812 11:57:20.974681   59908 start.go:241] waiting for startup goroutines ...
	I0812 11:57:20.974688   59908 start.go:246] waiting for cluster config update ...
	I0812 11:57:20.974700   59908 start.go:255] writing updated cluster config ...
	I0812 11:57:20.975043   59908 ssh_runner.go:195] Run: rm -f paused
	I0812 11:57:21.025000   59908 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0812 11:57:21.028153   59908 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-581883" cluster and "default" namespace by default
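	The log-gathering commands recorded in the trace above can also be replayed by hand when triaging a failure like this one. A minimal sketch, assuming shell access to the node via minikube ssh, using the profile name from this trace and a hypothetical <container-id> placeholder (take real IDs from crictl ps -a):
	
	    # Open a shell in the minikube VM (profile name taken from this trace).
	    minikube ssh -p default-k8s-diff-port-581883
	    # Service-level logs, as collected by the trace above.
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    # Container status and per-container logs via crictl.
	    sudo crictl ps -a
	    sudo crictl logs --tail 400 <container-id>
	    # Node description using the kubectl binary and kubeconfig staged by minikube.
	    sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	
	These mirror the ssh_runner invocations logged above; the kubectl path is the one minikube stages for Kubernetes v1.30.3 on this node.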
	
	
	==> CRI-O <==
	Aug 12 11:58:43 embed-certs-093615 crio[725]: time="2024-08-12 11:58:43.762114319Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723463923760781036,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5c372ea4-df77-4b41-8b62-2ea73ef0166a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:58:43 embed-certs-093615 crio[725]: time="2024-08-12 11:58:43.762990546Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4dee552b-884b-4d49-bb55-5d7efaee15f1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:58:43 embed-certs-093615 crio[725]: time="2024-08-12 11:58:43.763103451Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4dee552b-884b-4d49-bb55-5d7efaee15f1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:58:43 embed-certs-093615 crio[725]: time="2024-08-12 11:58:43.763306943Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ddd1e160b6318f08b006f93ac9bdd5283d33cdafa0156a2827ab62323b0ed011,PodSandboxId:3e24d404dc9fd67e7dc0075d8a44221509cc6bc7aaee318e92ea25893a2107ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463379827269460,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cjbwn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec8ff679-9b23-481d-b8c5-207b54e7e5ea,},Annotations:map[string]string{io.kubernetes.container.hash: 519a27d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4d2283db264218f130f764a3ab1c27d647657ab590b20d813df063c9f8f2c89,PodSandboxId:8e58e817dfe1e5cdc5e13a376cfecd1aeb54b5814acde5cd157ba435ca8019fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463379769400818,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zcpcc,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ed76b19c-cd96-4754-ae07-08a2a0b91387,},Annotations:map[string]string{io.kubernetes.container.hash: 6c68a0ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b995fc3e6be3942acbde64819cc76f96f3521923b35c9ae8fbec13f40206e98,PodSandboxId:c0db6336dcd60921546f5a41061dbf93a850639b46e902d2dd7ea25c4c70ef95,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1723463379082406043,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c29d9422-fc62-4536-974b-70ba940152c2,},Annotations:map[string]string{io.kubernetes.container.hash: 8fe9edba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:116de1fd0f81fcc9a61ddacd12b81674c9a887197a3aebaa4ae3a6ddfc637779,PodSandboxId:b15dac4a46926cd9bad0c1ea2ccfd9427583a535d0968f8e3dc84266d3fa9f08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1723463378095761475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26xvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cacdea2f-2ce2-43ab-8e3e-104a7a40d027,},Annotations:map[string]string{io.kubernetes.container.hash: 7a63889f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c50823884ae41af6cbe94544af5706985546f1b0e41dc59574bb16dfcb71d9c,PodSandboxId:3f91dcb6e01091555ec8783d6bab2461b58a5cc6a9f757533e791eaaad8a7172,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723463358309576920,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-093615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70c0b8f401b3620d72c88cbd19916771,},Annotations:map[string]string{io.kubernetes.container.hash: 5e923daa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3de12c4bbaca1a20ac2b011874af396a6391b160b46e59d40a394ec25cf9516f,PodSandboxId:971dd05803062f4bc3cc06f9e54759d8c764ba84b9b346b7e5b9721c9d699fa2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723463358257992078,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-093615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb6d8a130ae502a7aa2808cecf135d4e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81a99ad0a2faa742909ec94c2078f7075a9986f0655c9d860d3e4b92c5b1223a,PodSandboxId:d480a7755d15143c6279e01df8d4086d31f85406469fc39726964d71abbcdf6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723463358289118814,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-093615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2b6ca60428c5e7af527adc730f5d01,},Annotations:map[string]string{io.kubernetes.container.hash: 95d470e3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c719e81534f0ece7830b9712a865b739f53d90fc6379062adb5ffc60065dd36e,PodSandboxId:3c0c4462fd4eb5b3c67c2f21f5ffb934784a27cad4df0093aa9797218e95b9af,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723463358213167912,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-093615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4228075c00a9a0feb75301a73092757d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4dee552b-884b-4d49-bb55-5d7efaee15f1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:58:43 embed-certs-093615 crio[725]: time="2024-08-12 11:58:43.801105562Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=07a34fa2-be2a-4026-acc6-d1ee4aa6c2dd name=/runtime.v1.RuntimeService/Version
	Aug 12 11:58:43 embed-certs-093615 crio[725]: time="2024-08-12 11:58:43.801204036Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=07a34fa2-be2a-4026-acc6-d1ee4aa6c2dd name=/runtime.v1.RuntimeService/Version
	Aug 12 11:58:43 embed-certs-093615 crio[725]: time="2024-08-12 11:58:43.802272277Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e3714ddd-79e5-4f20-9d1e-17955c96da5a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:58:43 embed-certs-093615 crio[725]: time="2024-08-12 11:58:43.802666901Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723463923802645211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e3714ddd-79e5-4f20-9d1e-17955c96da5a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:58:43 embed-certs-093615 crio[725]: time="2024-08-12 11:58:43.803325211Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca4b2d11-5daa-44f6-a72e-7e12166ed127 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:58:43 embed-certs-093615 crio[725]: time="2024-08-12 11:58:43.803393562Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca4b2d11-5daa-44f6-a72e-7e12166ed127 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:58:43 embed-certs-093615 crio[725]: time="2024-08-12 11:58:43.803586441Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ddd1e160b6318f08b006f93ac9bdd5283d33cdafa0156a2827ab62323b0ed011,PodSandboxId:3e24d404dc9fd67e7dc0075d8a44221509cc6bc7aaee318e92ea25893a2107ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463379827269460,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cjbwn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec8ff679-9b23-481d-b8c5-207b54e7e5ea,},Annotations:map[string]string{io.kubernetes.container.hash: 519a27d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4d2283db264218f130f764a3ab1c27d647657ab590b20d813df063c9f8f2c89,PodSandboxId:8e58e817dfe1e5cdc5e13a376cfecd1aeb54b5814acde5cd157ba435ca8019fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463379769400818,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zcpcc,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ed76b19c-cd96-4754-ae07-08a2a0b91387,},Annotations:map[string]string{io.kubernetes.container.hash: 6c68a0ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b995fc3e6be3942acbde64819cc76f96f3521923b35c9ae8fbec13f40206e98,PodSandboxId:c0db6336dcd60921546f5a41061dbf93a850639b46e902d2dd7ea25c4c70ef95,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1723463379082406043,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c29d9422-fc62-4536-974b-70ba940152c2,},Annotations:map[string]string{io.kubernetes.container.hash: 8fe9edba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:116de1fd0f81fcc9a61ddacd12b81674c9a887197a3aebaa4ae3a6ddfc637779,PodSandboxId:b15dac4a46926cd9bad0c1ea2ccfd9427583a535d0968f8e3dc84266d3fa9f08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1723463378095761475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26xvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cacdea2f-2ce2-43ab-8e3e-104a7a40d027,},Annotations:map[string]string{io.kubernetes.container.hash: 7a63889f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c50823884ae41af6cbe94544af5706985546f1b0e41dc59574bb16dfcb71d9c,PodSandboxId:3f91dcb6e01091555ec8783d6bab2461b58a5cc6a9f757533e791eaaad8a7172,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723463358309576920,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-093615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70c0b8f401b3620d72c88cbd19916771,},Annotations:map[string]string{io.kubernetes.container.hash: 5e923daa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3de12c4bbaca1a20ac2b011874af396a6391b160b46e59d40a394ec25cf9516f,PodSandboxId:971dd05803062f4bc3cc06f9e54759d8c764ba84b9b346b7e5b9721c9d699fa2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723463358257992078,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-093615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb6d8a130ae502a7aa2808cecf135d4e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81a99ad0a2faa742909ec94c2078f7075a9986f0655c9d860d3e4b92c5b1223a,PodSandboxId:d480a7755d15143c6279e01df8d4086d31f85406469fc39726964d71abbcdf6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723463358289118814,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-093615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2b6ca60428c5e7af527adc730f5d01,},Annotations:map[string]string{io.kubernetes.container.hash: 95d470e3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c719e81534f0ece7830b9712a865b739f53d90fc6379062adb5ffc60065dd36e,PodSandboxId:3c0c4462fd4eb5b3c67c2f21f5ffb934784a27cad4df0093aa9797218e95b9af,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723463358213167912,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-093615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4228075c00a9a0feb75301a73092757d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ca4b2d11-5daa-44f6-a72e-7e12166ed127 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:58:43 embed-certs-093615 crio[725]: time="2024-08-12 11:58:43.844233790Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4ed0f5f1-b482-4203-8daf-47671f16c644 name=/runtime.v1.RuntimeService/Version
	Aug 12 11:58:43 embed-certs-093615 crio[725]: time="2024-08-12 11:58:43.844328536Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4ed0f5f1-b482-4203-8daf-47671f16c644 name=/runtime.v1.RuntimeService/Version
	Aug 12 11:58:43 embed-certs-093615 crio[725]: time="2024-08-12 11:58:43.845711487Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7eca1694-0c6c-4e0f-ba34-35150e3f66ce name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:58:43 embed-certs-093615 crio[725]: time="2024-08-12 11:58:43.846344804Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723463923846317395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7eca1694-0c6c-4e0f-ba34-35150e3f66ce name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:58:43 embed-certs-093615 crio[725]: time="2024-08-12 11:58:43.846931609Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cad447ca-c6ef-4512-aadd-1c4bcca93a70 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:58:43 embed-certs-093615 crio[725]: time="2024-08-12 11:58:43.847012823Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cad447ca-c6ef-4512-aadd-1c4bcca93a70 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:58:43 embed-certs-093615 crio[725]: time="2024-08-12 11:58:43.847271792Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ddd1e160b6318f08b006f93ac9bdd5283d33cdafa0156a2827ab62323b0ed011,PodSandboxId:3e24d404dc9fd67e7dc0075d8a44221509cc6bc7aaee318e92ea25893a2107ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463379827269460,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cjbwn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec8ff679-9b23-481d-b8c5-207b54e7e5ea,},Annotations:map[string]string{io.kubernetes.container.hash: 519a27d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4d2283db264218f130f764a3ab1c27d647657ab590b20d813df063c9f8f2c89,PodSandboxId:8e58e817dfe1e5cdc5e13a376cfecd1aeb54b5814acde5cd157ba435ca8019fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463379769400818,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zcpcc,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ed76b19c-cd96-4754-ae07-08a2a0b91387,},Annotations:map[string]string{io.kubernetes.container.hash: 6c68a0ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b995fc3e6be3942acbde64819cc76f96f3521923b35c9ae8fbec13f40206e98,PodSandboxId:c0db6336dcd60921546f5a41061dbf93a850639b46e902d2dd7ea25c4c70ef95,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1723463379082406043,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c29d9422-fc62-4536-974b-70ba940152c2,},Annotations:map[string]string{io.kubernetes.container.hash: 8fe9edba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:116de1fd0f81fcc9a61ddacd12b81674c9a887197a3aebaa4ae3a6ddfc637779,PodSandboxId:b15dac4a46926cd9bad0c1ea2ccfd9427583a535d0968f8e3dc84266d3fa9f08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1723463378095761475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26xvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cacdea2f-2ce2-43ab-8e3e-104a7a40d027,},Annotations:map[string]string{io.kubernetes.container.hash: 7a63889f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c50823884ae41af6cbe94544af5706985546f1b0e41dc59574bb16dfcb71d9c,PodSandboxId:3f91dcb6e01091555ec8783d6bab2461b58a5cc6a9f757533e791eaaad8a7172,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723463358309576920,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-093615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70c0b8f401b3620d72c88cbd19916771,},Annotations:map[string]string{io.kubernetes.container.hash: 5e923daa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3de12c4bbaca1a20ac2b011874af396a6391b160b46e59d40a394ec25cf9516f,PodSandboxId:971dd05803062f4bc3cc06f9e54759d8c764ba84b9b346b7e5b9721c9d699fa2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723463358257992078,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-093615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb6d8a130ae502a7aa2808cecf135d4e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81a99ad0a2faa742909ec94c2078f7075a9986f0655c9d860d3e4b92c5b1223a,PodSandboxId:d480a7755d15143c6279e01df8d4086d31f85406469fc39726964d71abbcdf6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723463358289118814,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-093615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2b6ca60428c5e7af527adc730f5d01,},Annotations:map[string]string{io.kubernetes.container.hash: 95d470e3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c719e81534f0ece7830b9712a865b739f53d90fc6379062adb5ffc60065dd36e,PodSandboxId:3c0c4462fd4eb5b3c67c2f21f5ffb934784a27cad4df0093aa9797218e95b9af,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723463358213167912,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-093615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4228075c00a9a0feb75301a73092757d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cad447ca-c6ef-4512-aadd-1c4bcca93a70 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:58:43 embed-certs-093615 crio[725]: time="2024-08-12 11:58:43.879613923Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf13563f-f275-46f7-8e0f-85ccec65c8fb name=/runtime.v1.RuntimeService/Version
	Aug 12 11:58:43 embed-certs-093615 crio[725]: time="2024-08-12 11:58:43.879688584Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf13563f-f275-46f7-8e0f-85ccec65c8fb name=/runtime.v1.RuntimeService/Version
	Aug 12 11:58:43 embed-certs-093615 crio[725]: time="2024-08-12 11:58:43.881177742Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=efec18dc-3513-4bc6-9c22-47304dd863b5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:58:43 embed-certs-093615 crio[725]: time="2024-08-12 11:58:43.881588515Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723463923881565597,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=efec18dc-3513-4bc6-9c22-47304dd863b5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 11:58:43 embed-certs-093615 crio[725]: time="2024-08-12 11:58:43.881973857Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4afb58eb-0659-47b4-a3f5-4935d81158d5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:58:43 embed-certs-093615 crio[725]: time="2024-08-12 11:58:43.882173676Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4afb58eb-0659-47b4-a3f5-4935d81158d5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 11:58:43 embed-certs-093615 crio[725]: time="2024-08-12 11:58:43.882382125Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ddd1e160b6318f08b006f93ac9bdd5283d33cdafa0156a2827ab62323b0ed011,PodSandboxId:3e24d404dc9fd67e7dc0075d8a44221509cc6bc7aaee318e92ea25893a2107ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463379827269460,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cjbwn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec8ff679-9b23-481d-b8c5-207b54e7e5ea,},Annotations:map[string]string{io.kubernetes.container.hash: 519a27d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4d2283db264218f130f764a3ab1c27d647657ab590b20d813df063c9f8f2c89,PodSandboxId:8e58e817dfe1e5cdc5e13a376cfecd1aeb54b5814acde5cd157ba435ca8019fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463379769400818,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zcpcc,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ed76b19c-cd96-4754-ae07-08a2a0b91387,},Annotations:map[string]string{io.kubernetes.container.hash: 6c68a0ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b995fc3e6be3942acbde64819cc76f96f3521923b35c9ae8fbec13f40206e98,PodSandboxId:c0db6336dcd60921546f5a41061dbf93a850639b46e902d2dd7ea25c4c70ef95,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1723463379082406043,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c29d9422-fc62-4536-974b-70ba940152c2,},Annotations:map[string]string{io.kubernetes.container.hash: 8fe9edba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:116de1fd0f81fcc9a61ddacd12b81674c9a887197a3aebaa4ae3a6ddfc637779,PodSandboxId:b15dac4a46926cd9bad0c1ea2ccfd9427583a535d0968f8e3dc84266d3fa9f08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1723463378095761475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26xvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cacdea2f-2ce2-43ab-8e3e-104a7a40d027,},Annotations:map[string]string{io.kubernetes.container.hash: 7a63889f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c50823884ae41af6cbe94544af5706985546f1b0e41dc59574bb16dfcb71d9c,PodSandboxId:3f91dcb6e01091555ec8783d6bab2461b58a5cc6a9f757533e791eaaad8a7172,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723463358309576920,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-093615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70c0b8f401b3620d72c88cbd19916771,},Annotations:map[string]string{io.kubernetes.container.hash: 5e923daa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3de12c4bbaca1a20ac2b011874af396a6391b160b46e59d40a394ec25cf9516f,PodSandboxId:971dd05803062f4bc3cc06f9e54759d8c764ba84b9b346b7e5b9721c9d699fa2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723463358257992078,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-093615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb6d8a130ae502a7aa2808cecf135d4e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81a99ad0a2faa742909ec94c2078f7075a9986f0655c9d860d3e4b92c5b1223a,PodSandboxId:d480a7755d15143c6279e01df8d4086d31f85406469fc39726964d71abbcdf6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723463358289118814,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-093615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2b6ca60428c5e7af527adc730f5d01,},Annotations:map[string]string{io.kubernetes.container.hash: 95d470e3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c719e81534f0ece7830b9712a865b739f53d90fc6379062adb5ffc60065dd36e,PodSandboxId:3c0c4462fd4eb5b3c67c2f21f5ffb934784a27cad4df0093aa9797218e95b9af,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723463358213167912,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-093615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4228075c00a9a0feb75301a73092757d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4afb58eb-0659-47b4-a3f5-4935d81158d5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ddd1e160b6318       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   3e24d404dc9fd       coredns-7db6d8ff4d-cjbwn
	d4d2283db2642       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   8e58e817dfe1e       coredns-7db6d8ff4d-zcpcc
	9b995fc3e6be3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   c0db6336dcd60       storage-provisioner
	116de1fd0f81f       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   9 minutes ago       Running             kube-proxy                0                   b15dac4a46926       kube-proxy-26xvl
	5c50823884ae4       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   3f91dcb6e0109       etcd-embed-certs-093615
	81a99ad0a2faa       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   9 minutes ago       Running             kube-apiserver            2                   d480a7755d151       kube-apiserver-embed-certs-093615
	3de12c4bbaca1       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   9 minutes ago       Running             kube-scheduler            2                   971dd05803062       kube-scheduler-embed-certs-093615
	c719e81534f0e       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   9 minutes ago       Running             kube-controller-manager   2                   3c0c4462fd4eb       kube-controller-manager-embed-certs-093615
	
	
	==> coredns [d4d2283db264218f130f764a3ab1c27d647657ab590b20d813df063c9f8f2c89] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ddd1e160b6318f08b006f93ac9bdd5283d33cdafa0156a2827ab62323b0ed011] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-093615
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-093615
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
	                    minikube.k8s.io/name=embed-certs-093615
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_12T11_49_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 11:49:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-093615
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 11:58:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 11:54:49 +0000   Mon, 12 Aug 2024 11:49:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 11:54:49 +0000   Mon, 12 Aug 2024 11:49:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 11:54:49 +0000   Mon, 12 Aug 2024 11:49:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 11:54:49 +0000   Mon, 12 Aug 2024 11:49:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.191
	  Hostname:    embed-certs-093615
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 10f7733c219f4141bc1cc7d55f20a08a
	  System UUID:                10f7733c-219f-4141-bc1c-c7d55f20a08a
	  Boot ID:                    52319191-26f0-4bd5-85ad-e38640b2e855
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-cjbwn                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-7db6d8ff4d-zcpcc                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-embed-certs-093615                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m22s
	  kube-system                 kube-apiserver-embed-certs-093615             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-embed-certs-093615    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-26xvl                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-embed-certs-093615             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-569cc877fc-kwk6t               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m5s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m27s (x8 over 9m27s)  kubelet          Node embed-certs-093615 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m27s (x8 over 9m27s)  kubelet          Node embed-certs-093615 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m27s (x7 over 9m27s)  kubelet          Node embed-certs-093615 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m21s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m21s                  kubelet          Node embed-certs-093615 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s                  kubelet          Node embed-certs-093615 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s                  kubelet          Node embed-certs-093615 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m7s                   node-controller  Node embed-certs-093615 event: Registered Node embed-certs-093615 in Controller
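	The node summary above can be re-queried directly from the host at any time. A minimal sketch, assuming the kubectl context for this profile is named embed-certs-093615 (minikube names contexts after the profile; adjust if yours differs):
	
	    # Re-run the node description and check kube-system pod state,
	    # e.g. to confirm whether metrics-server-569cc877fc-kwk6t is still Pending.
	    kubectl --context embed-certs-093615 describe node embed-certs-093615
	    kubectl --context embed-certs-093615 -n kube-system get pods -o wide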
	
	
	==> dmesg <==
	[  +0.055959] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045444] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.027249] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.144775] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.627689] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.547381] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.067931] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067174] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.169367] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +0.150980] systemd-fstab-generator[680]: Ignoring "noauto" option for root device
	[  +0.289404] systemd-fstab-generator[710]: Ignoring "noauto" option for root device
	[  +4.568962] systemd-fstab-generator[807]: Ignoring "noauto" option for root device
	[  +0.070215] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.076799] systemd-fstab-generator[929]: Ignoring "noauto" option for root device
	[  +4.658866] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.716386] kauditd_printk_skb: 79 callbacks suppressed
	[Aug12 11:49] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.635053] systemd-fstab-generator[3572]: Ignoring "noauto" option for root device
	[  +6.060865] systemd-fstab-generator[3896]: Ignoring "noauto" option for root device
	[  +0.072476] kauditd_printk_skb: 57 callbacks suppressed
	[ +14.321167] systemd-fstab-generator[4094]: Ignoring "noauto" option for root device
	[  +0.116619] kauditd_printk_skb: 12 callbacks suppressed
	[Aug12 11:50] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [5c50823884ae41af6cbe94544af5706985546f1b0e41dc59574bb16dfcb71d9c] <==
	{"level":"info","ts":"2024-08-12T11:49:18.942948Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-12T11:49:18.943234Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"13882d9d804521e5","local-member-id":"457fa619cab3a8e","added-peer-id":"457fa619cab3a8e","added-peer-peer-urls":["https://192.168.72.191:2380"]}
	{"level":"info","ts":"2024-08-12T11:49:18.9433Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"457fa619cab3a8e","initial-advertise-peer-urls":["https://192.168.72.191:2380"],"listen-peer-urls":["https://192.168.72.191:2380"],"advertise-client-urls":["https://192.168.72.191:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.191:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-12T11:49:18.943451Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-12T11:49:18.944525Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.191:2380"}
	{"level":"info","ts":"2024-08-12T11:49:18.949086Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.191:2380"}
	{"level":"info","ts":"2024-08-12T11:49:19.449094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457fa619cab3a8e is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-12T11:49:19.449143Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457fa619cab3a8e became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-12T11:49:19.449176Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457fa619cab3a8e received MsgPreVoteResp from 457fa619cab3a8e at term 1"}
	{"level":"info","ts":"2024-08-12T11:49:19.44919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457fa619cab3a8e became candidate at term 2"}
	{"level":"info","ts":"2024-08-12T11:49:19.449196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457fa619cab3a8e received MsgVoteResp from 457fa619cab3a8e at term 2"}
	{"level":"info","ts":"2024-08-12T11:49:19.449215Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457fa619cab3a8e became leader at term 2"}
	{"level":"info","ts":"2024-08-12T11:49:19.449222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 457fa619cab3a8e elected leader 457fa619cab3a8e at term 2"}
	{"level":"info","ts":"2024-08-12T11:49:19.453301Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"457fa619cab3a8e","local-member-attributes":"{Name:embed-certs-093615 ClientURLs:[https://192.168.72.191:2379]}","request-path":"/0/members/457fa619cab3a8e/attributes","cluster-id":"13882d9d804521e5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-12T11:49:19.453436Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-12T11:49:19.453805Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T11:49:19.453965Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-12T11:49:19.462171Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-12T11:49:19.462763Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-12T11:49:19.462291Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.191:2379"}
	{"level":"info","ts":"2024-08-12T11:49:19.464243Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"13882d9d804521e5","local-member-id":"457fa619cab3a8e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T11:49:19.464405Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T11:49:19.464447Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T11:49:19.48048Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-12T11:52:10.148388Z","caller":"traceutil/trace.go:171","msg":"trace[693313145] transaction","detail":"{read_only:false; response_revision:606; number_of_response:1; }","duration":"126.624487ms","start":"2024-08-12T11:52:10.021717Z","end":"2024-08-12T11:52:10.148341Z","steps":["trace[693313145] 'process raft request'  (duration: 126.440189ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:58:44 up 14 min,  0 users,  load average: 0.22, 0.34, 0.25
	Linux embed-certs-093615 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [81a99ad0a2faa742909ec94c2078f7075a9986f0655c9d860d3e4b92c5b1223a] <==
	I0812 11:52:39.566251       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0812 11:54:20.974649       1 handler_proxy.go:93] no RequestInfo found in the context
	E0812 11:54:20.974919       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0812 11:54:21.975604       1 handler_proxy.go:93] no RequestInfo found in the context
	E0812 11:54:21.975726       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0812 11:54:21.975757       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0812 11:54:21.975930       1 handler_proxy.go:93] no RequestInfo found in the context
	E0812 11:54:21.976094       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0812 11:54:21.977315       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0812 11:55:21.976136       1 handler_proxy.go:93] no RequestInfo found in the context
	E0812 11:55:21.976296       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0812 11:55:21.976328       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0812 11:55:21.977726       1 handler_proxy.go:93] no RequestInfo found in the context
	E0812 11:55:21.977826       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0812 11:55:21.977859       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0812 11:57:21.977107       1 handler_proxy.go:93] no RequestInfo found in the context
	E0812 11:57:21.977186       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0812 11:57:21.977199       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0812 11:57:21.978375       1 handler_proxy.go:93] no RequestInfo found in the context
	E0812 11:57:21.978444       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0812 11:57:21.978450       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [c719e81534f0ece7830b9712a865b739f53d90fc6379062adb5ffc60065dd36e] <==
	I0812 11:53:10.490308       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="67.358µs"
	E0812 11:53:37.595706       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 11:53:38.053652       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 11:54:07.602783       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 11:54:08.063177       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 11:54:37.609258       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 11:54:38.071411       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 11:55:07.614594       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 11:55:08.079121       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 11:55:37.622952       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 11:55:38.086874       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0812 11:55:39.490594       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="337.968µs"
	I0812 11:55:52.485726       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="153.334µs"
	E0812 11:56:07.629255       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 11:56:08.097297       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 11:56:37.635694       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 11:56:38.104963       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 11:57:07.641765       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 11:57:08.113088       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 11:57:37.647604       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 11:57:38.121683       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 11:58:07.653166       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 11:58:08.130245       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 11:58:37.658667       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 11:58:38.138646       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [116de1fd0f81fcc9a61ddacd12b81674c9a887197a3aebaa4ae3a6ddfc637779] <==
	I0812 11:49:38.480795       1 server_linux.go:69] "Using iptables proxy"
	I0812 11:49:38.496197       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.191"]
	I0812 11:49:38.632332       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0812 11:49:38.632416       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0812 11:49:38.632434       1 server_linux.go:165] "Using iptables Proxier"
	I0812 11:49:38.635272       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0812 11:49:38.635637       1 server.go:872] "Version info" version="v1.30.3"
	I0812 11:49:38.635667       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 11:49:38.637117       1 config.go:192] "Starting service config controller"
	I0812 11:49:38.637226       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0812 11:49:38.637269       1 config.go:101] "Starting endpoint slice config controller"
	I0812 11:49:38.637302       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0812 11:49:38.642847       1 config.go:319] "Starting node config controller"
	I0812 11:49:38.642887       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0812 11:49:38.737474       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0812 11:49:38.737551       1 shared_informer.go:320] Caches are synced for service config
	I0812 11:49:38.743682       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3de12c4bbaca1a20ac2b011874af396a6391b160b46e59d40a394ec25cf9516f] <==
	E0812 11:49:21.006567       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0812 11:49:21.006724       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0812 11:49:21.006752       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0812 11:49:21.006795       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0812 11:49:21.006818       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0812 11:49:21.006848       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0812 11:49:21.842224       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0812 11:49:21.842273       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0812 11:49:21.919155       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0812 11:49:21.919223       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0812 11:49:22.131771       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0812 11:49:22.131886       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0812 11:49:22.145738       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0812 11:49:22.145892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0812 11:49:22.273284       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0812 11:49:22.273391       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0812 11:49:22.300314       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0812 11:49:22.300899       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0812 11:49:22.327790       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0812 11:49:22.327915       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0812 11:49:22.330554       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0812 11:49:22.330666       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0812 11:49:22.440276       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0812 11:49:22.440377       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0812 11:49:24.395710       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 12 11:56:23 embed-certs-093615 kubelet[3903]: E0812 11:56:23.495946    3903 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 11:56:23 embed-certs-093615 kubelet[3903]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 11:56:23 embed-certs-093615 kubelet[3903]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 11:56:23 embed-certs-093615 kubelet[3903]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 11:56:23 embed-certs-093615 kubelet[3903]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 11:56:36 embed-certs-093615 kubelet[3903]: E0812 11:56:36.471149    3903 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kwk6t" podUID="5817f68c-ab3e-4b50-acf1-8d56d25dcbcd"
	Aug 12 11:56:47 embed-certs-093615 kubelet[3903]: E0812 11:56:47.471357    3903 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kwk6t" podUID="5817f68c-ab3e-4b50-acf1-8d56d25dcbcd"
	Aug 12 11:57:01 embed-certs-093615 kubelet[3903]: E0812 11:57:01.472280    3903 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kwk6t" podUID="5817f68c-ab3e-4b50-acf1-8d56d25dcbcd"
	Aug 12 11:57:15 embed-certs-093615 kubelet[3903]: E0812 11:57:15.471941    3903 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kwk6t" podUID="5817f68c-ab3e-4b50-acf1-8d56d25dcbcd"
	Aug 12 11:57:23 embed-certs-093615 kubelet[3903]: E0812 11:57:23.495843    3903 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 11:57:23 embed-certs-093615 kubelet[3903]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 11:57:23 embed-certs-093615 kubelet[3903]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 11:57:23 embed-certs-093615 kubelet[3903]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 11:57:23 embed-certs-093615 kubelet[3903]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 11:57:27 embed-certs-093615 kubelet[3903]: E0812 11:57:27.472718    3903 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kwk6t" podUID="5817f68c-ab3e-4b50-acf1-8d56d25dcbcd"
	Aug 12 11:57:38 embed-certs-093615 kubelet[3903]: E0812 11:57:38.471710    3903 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kwk6t" podUID="5817f68c-ab3e-4b50-acf1-8d56d25dcbcd"
	Aug 12 11:57:53 embed-certs-093615 kubelet[3903]: E0812 11:57:53.471955    3903 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kwk6t" podUID="5817f68c-ab3e-4b50-acf1-8d56d25dcbcd"
	Aug 12 11:58:06 embed-certs-093615 kubelet[3903]: E0812 11:58:06.471491    3903 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kwk6t" podUID="5817f68c-ab3e-4b50-acf1-8d56d25dcbcd"
	Aug 12 11:58:19 embed-certs-093615 kubelet[3903]: E0812 11:58:19.473012    3903 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kwk6t" podUID="5817f68c-ab3e-4b50-acf1-8d56d25dcbcd"
	Aug 12 11:58:23 embed-certs-093615 kubelet[3903]: E0812 11:58:23.498508    3903 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 11:58:23 embed-certs-093615 kubelet[3903]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 11:58:23 embed-certs-093615 kubelet[3903]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 11:58:23 embed-certs-093615 kubelet[3903]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 11:58:23 embed-certs-093615 kubelet[3903]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 11:58:32 embed-certs-093615 kubelet[3903]: E0812 11:58:32.471862    3903 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kwk6t" podUID="5817f68c-ab3e-4b50-acf1-8d56d25dcbcd"
	
	
	==> storage-provisioner [9b995fc3e6be3942acbde64819cc76f96f3521923b35c9ae8fbec13f40206e98] <==
	I0812 11:49:39.171735       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0812 11:49:39.191105       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0812 11:49:39.191229       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0812 11:49:39.208649       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0812 11:49:39.210099       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8fcba3c9-31aa-44e8-bdf8-fdb149899bc1", APIVersion:"v1", ResourceVersion:"429", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-093615_dd3c78b0-c18c-46fc-85c1-b42b6876d95c became leader
	I0812 11:49:39.210202       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-093615_dd3c78b0-c18c-46fc-85c1-b42b6876d95c!
	I0812 11:49:39.311319       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-093615_dd3c78b0-c18c-46fc-85c1-b42b6876d95c!
	

                                                
                                                
-- /stdout --
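The dump above is the standard minikube post-mortem: a kubectl describe node summary (pods, allocated resources, node events) followed by per-component logs (dmesg, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler, kubelet, storage-provisioner). For reference, a minimal way to reproduce roughly the same view by hand, assuming the embed-certs-093615 profile from the log is still running, would be:

  # node resource requests/limits and recent events (the first part of the dump)
  kubectl --context embed-certs-093615 describe node embed-certs-093615
  # which system pods are not Running, and in what state
  kubectl --context embed-certs-093615 -n kube-system get pods -o wide
  # the combined per-component log dump for the profile
  out/minikube-linux-amd64 -p embed-certs-093615 logs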
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-093615 -n embed-certs-093615
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-093615 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-kwk6t
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-093615 describe pod metrics-server-569cc877fc-kwk6t
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-093615 describe pod metrics-server-569cc877fc-kwk6t: exit status 1 (65.033202ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-kwk6t" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-093615 describe pod metrics-server-569cc877fc-kwk6t: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.56s)
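A note on the post-mortem steps above: the final describe targets metrics-server-569cc877fc-kwk6t by the pod name captured in the earlier listing, and by the time it runs that exact name is gone, so the step exits NotFound instead of surfacing the ImagePullBackOff for fake.domain/registry.k8s.io/echoserver:1.4 that the kubelet log keeps reporting. A label-based lookup is a more robust follow-up; this is only a sketch, and the k8s-app=metrics-server label is an assumption about how the addon labels its pods:

  # select whatever metrics-server pod currently exists, by label rather than by a stale name
  kubectl --context embed-certs-093615 -n kube-system get pods -l k8s-app=metrics-server -o wide
  # its events should show the back-off pulling fake.domain/registry.k8s.io/echoserver:1.4
  kubectl --context embed-certs-093615 -n kube-system describe pods -l k8s-app=metrics-server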

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
E0812 11:53:30.976104   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
E0812 11:55:45.936319   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
[... warning repeated 48 more times: dial tcp 192.168.39.17:8443: connect: connection refused ...]
E0812 11:56:34.024572   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
E0812 11:58:30.975401   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-835962 -n old-k8s-version-835962
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-835962 -n old-k8s-version-835962: exit status 2 (236.874675ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-835962" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
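Note: the dashboard readiness poll that times out above is essentially a label-selector pod list against the profile's apiserver (hence the repeated "connection refused" warnings while the apiserver is stopped). Below is a minimal client-go sketch of an equivalent manual check; the kubeconfig path and the error handling are illustrative assumptions, not the test harness's actual wiring.

// Hypothetical sketch (not the minikube test harness code): reproduce the
// pod-list check that times out above, using client-go against the profile's
// kubeconfig. The kubeconfig path below is an assumption for illustration.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the real harness builds its client differently.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Same namespace and label selector as the failing wait above.
	pods, err := clientset.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
		LabelSelector: "k8s-app=kubernetes-dashboard",
	})
	if err != nil {
		// With the apiserver stopped this returns "connection refused",
		// matching the warnings in this log.
		fmt.Println("pod list failed:", err)
		return
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name, p.Status.Phase)
	}
}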
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-835962 -n old-k8s-version-835962
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-835962 -n old-k8s-version-835962: exit status 2 (226.021086ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
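Note: the status probes above rely on minikube's Go-template output: status --format={{.APIServer}} (or {{.Host}}) prints the requested component's state on stdout, while the exit code reflects overall cluster health, so it is non-zero here even though the host prints "Running" because the apiserver and kubelet are down. That is why the harness records "exit status 2 (may be ok)" and still reads the state from stdout. A minimal sketch of that probe pattern, assuming a hypothetical statusprobe package and placeholder binary/profile names, not the test's actual code:

package statusprobe

import (
	"os/exec"
	"strings"
)

// apiServerState shells out to the minikube binary, reads the templated
// component state from stdout, and returns the exit code alongside it.
// minikubeBin and profile are placeholders for the binary and profile under test.
func apiServerState(minikubeBin, profile string) (state string, exitCode int, err error) {
	cmd := exec.Command(minikubeBin, "status", "--format={{.APIServer}}", "-p", profile, "-n", profile)
	out, runErr := cmd.Output()
	state = strings.TrimSpace(string(out))
	if runErr != nil {
		if ee, ok := runErr.(*exec.ExitError); ok {
			// Non-zero exit: the state was still printed on stdout, so report both.
			return state, ee.ExitCode(), nil
		}
		return state, -1, runErr
	}
	return state, 0, nil
}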
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-835962 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p pause-693259                                        | pause-693259                 | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-693259                                        | pause-693259                 | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-693259                                        | pause-693259                 | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	| start   | -p embed-certs-093615                                  | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:35 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-002803                              | cert-expiration-002803       | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	| delete  | -p                                                     | disable-driver-mounts-101845 | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	|         | disable-driver-mounts-101845                           |                              |         |         |                     |                     |
	| start   | -p no-preload-993542                                   | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:36 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-093615            | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 11:35 UTC | 12 Aug 24 11:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-093615                                  | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 11:35 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-993542             | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:36 UTC | 12 Aug 24 11:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-993542                                   | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-835962        | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 11:37 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-093615                 | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 11:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-093615                                  | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 11:38 UTC | 12 Aug 24 11:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-835962                              | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 11:38 UTC | 12 Aug 24 11:39 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-835962             | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC | 12 Aug 24 11:39 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-835962                              | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-535697                           | kubernetes-upgrade-535697    | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC | 12 Aug 24 11:39 UTC |
	| start   | -p                                                     | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC | 12 Aug 24 11:44 UTC |
	|         | default-k8s-diff-port-581883                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-993542                  | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-993542                                   | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC | 12 Aug 24 11:49 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-581883  | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:44 UTC | 12 Aug 24 11:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:44 UTC |                     |
	|         | default-k8s-diff-port-581883                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-581883       | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:46 UTC | 12 Aug 24 11:57 UTC |
	|         | default-k8s-diff-port-581883                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 11:46:59
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 11:46:59.013199   59908 out.go:291] Setting OutFile to fd 1 ...
	I0812 11:46:59.013476   59908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:46:59.013486   59908 out.go:304] Setting ErrFile to fd 2...
	I0812 11:46:59.013490   59908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:46:59.013689   59908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 11:46:59.014204   59908 out.go:298] Setting JSON to false
	I0812 11:46:59.015302   59908 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5360,"bootTime":1723457859,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 11:46:59.015368   59908 start.go:139] virtualization: kvm guest
	I0812 11:46:59.017512   59908 out.go:177] * [default-k8s-diff-port-581883] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 11:46:59.018833   59908 notify.go:220] Checking for updates...
	I0812 11:46:59.018859   59908 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 11:46:59.020251   59908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 11:46:59.021646   59908 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 11:46:59.022806   59908 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 11:46:59.024110   59908 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 11:46:59.025476   59908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 11:46:59.027290   59908 config.go:182] Loaded profile config "default-k8s-diff-port-581883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 11:46:59.027911   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:46:59.028000   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:46:59.042960   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39549
	I0812 11:46:59.043506   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:46:59.044010   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:46:59.044038   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:46:59.044357   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:46:59.044528   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:46:59.044791   59908 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 11:46:59.045201   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:46:59.045244   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:46:59.060824   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35189
	I0812 11:46:59.061268   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:46:59.061747   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:46:59.061775   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:46:59.062156   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:46:59.062346   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:46:59.101403   59908 out.go:177] * Using the kvm2 driver based on existing profile
	I0812 11:46:59.102677   59908 start.go:297] selected driver: kvm2
	I0812 11:46:59.102698   59908 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-581883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-581883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:46:59.102863   59908 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 11:46:59.103621   59908 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 11:46:59.103690   59908 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19409-3774/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 11:46:59.119409   59908 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 11:46:59.119785   59908 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 11:46:59.119848   59908 cni.go:84] Creating CNI manager for ""
	I0812 11:46:59.119862   59908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:46:59.119900   59908 start.go:340] cluster config:
	{Name:default-k8s-diff-port-581883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-581883 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:46:59.120006   59908 iso.go:125] acquiring lock: {Name:mk817f4bb6a5031e68978029203802957205757f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 11:46:59.121814   59908 out.go:177] * Starting "default-k8s-diff-port-581883" primary control-plane node in "default-k8s-diff-port-581883" cluster
	I0812 11:46:59.123067   59908 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 11:46:59.123111   59908 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0812 11:46:59.123124   59908 cache.go:56] Caching tarball of preloaded images
	I0812 11:46:59.123213   59908 preload.go:172] Found /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 11:46:59.123228   59908 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 11:46:59.123315   59908 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/config.json ...
	I0812 11:46:59.123508   59908 start.go:360] acquireMachinesLock for default-k8s-diff-port-581883: {Name:mkf1f3ff7da562a087a1e344d13e67c3c8140973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 11:46:59.123549   59908 start.go:364] duration metric: took 23.58µs to acquireMachinesLock for "default-k8s-diff-port-581883"
	I0812 11:46:59.123562   59908 start.go:96] Skipping create...Using existing machine configuration
	I0812 11:46:59.123569   59908 fix.go:54] fixHost starting: 
	I0812 11:46:59.123822   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:46:59.123852   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:46:59.138741   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44075
	I0812 11:46:59.139136   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:46:59.139611   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:46:59.139638   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:46:59.139938   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:46:59.140109   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:46:59.140220   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetState
	I0812 11:46:59.141738   59908 fix.go:112] recreateIfNeeded on default-k8s-diff-port-581883: state=Running err=<nil>
	W0812 11:46:59.141754   59908 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 11:46:59.143728   59908 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-581883" VM ...
	I0812 11:46:54.633587   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:46:54.653858   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:46:54.653945   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:46:54.693961   57198 cri.go:89] found id: ""
	I0812 11:46:54.693985   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.693992   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:46:54.693997   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:46:54.694045   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:46:54.728922   57198 cri.go:89] found id: ""
	I0812 11:46:54.728951   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.728963   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:46:54.728970   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:46:54.729034   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:46:54.764203   57198 cri.go:89] found id: ""
	I0812 11:46:54.764235   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.764246   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:46:54.764253   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:46:54.764316   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:46:54.805321   57198 cri.go:89] found id: ""
	I0812 11:46:54.805352   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.805363   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:46:54.805370   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:46:54.805437   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:46:54.844243   57198 cri.go:89] found id: ""
	I0812 11:46:54.844273   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.844281   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:46:54.844287   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:46:54.844345   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:46:54.883145   57198 cri.go:89] found id: ""
	I0812 11:46:54.883181   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.883192   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:46:54.883200   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:46:54.883263   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:46:54.921119   57198 cri.go:89] found id: ""
	I0812 11:46:54.921150   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.921160   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:46:54.921168   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:46:54.921230   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:46:54.955911   57198 cri.go:89] found id: ""
	I0812 11:46:54.955941   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.955949   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:46:54.955958   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:46:54.955969   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:46:55.006069   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:46:55.006108   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:46:55.020600   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:46:55.020637   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:46:55.094897   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:46:55.094917   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:46:55.094932   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:46:55.173601   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:46:55.173642   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:46:57.711917   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:46:57.726261   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:46:57.726340   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:46:57.762810   57198 cri.go:89] found id: ""
	I0812 11:46:57.762834   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.762845   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:46:57.762853   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:46:57.762919   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:46:57.796596   57198 cri.go:89] found id: ""
	I0812 11:46:57.796638   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.796649   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:46:57.796657   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:46:57.796719   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:46:57.829568   57198 cri.go:89] found id: ""
	I0812 11:46:57.829600   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.829607   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:46:57.829612   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:46:57.829659   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:46:57.861229   57198 cri.go:89] found id: ""
	I0812 11:46:57.861260   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.861271   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:46:57.861278   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:46:57.861339   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:46:57.892274   57198 cri.go:89] found id: ""
	I0812 11:46:57.892302   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.892312   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:46:57.892320   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:46:57.892384   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:46:57.924635   57198 cri.go:89] found id: ""
	I0812 11:46:57.924662   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.924670   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:46:57.924675   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:46:57.924723   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:46:57.961539   57198 cri.go:89] found id: ""
	I0812 11:46:57.961584   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.961592   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:46:57.961598   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:46:57.961656   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:46:57.994115   57198 cri.go:89] found id: ""
	I0812 11:46:57.994148   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.994160   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:46:57.994170   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:46:57.994182   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:46:58.067608   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:46:58.067648   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:46:58.105003   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:46:58.105036   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:46:58.156152   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:46:58.156186   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:46:58.169380   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:46:58.169409   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:46:58.236991   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:46:56.296673   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:46:58.297248   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:00.796584   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:00.182029   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:02.182240   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:46:59.144895   59908 machine.go:94] provisionDockerMachine start ...
	I0812 11:46:59.144926   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:46:59.145161   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:46:59.147827   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:46:59.148278   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:43:32 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:46:59.148305   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:46:59.148451   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:46:59.148645   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:46:59.148820   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:46:59.148953   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:46:59.149111   59908 main.go:141] libmachine: Using SSH client type: native
	I0812 11:46:59.149331   59908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0812 11:46:59.149345   59908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0812 11:47:02.045251   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:00.737522   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:00.750916   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:00.750991   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:00.782713   57198 cri.go:89] found id: ""
	I0812 11:47:00.782734   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.782742   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:00.782747   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:00.782793   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:00.816552   57198 cri.go:89] found id: ""
	I0812 11:47:00.816576   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.816584   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:00.816590   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:00.816639   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:00.850761   57198 cri.go:89] found id: ""
	I0812 11:47:00.850784   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.850794   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:00.850801   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:00.850864   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:00.888099   57198 cri.go:89] found id: ""
	I0812 11:47:00.888138   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.888146   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:00.888152   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:00.888210   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:00.926073   57198 cri.go:89] found id: ""
	I0812 11:47:00.926103   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.926113   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:00.926120   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:00.926187   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:00.963404   57198 cri.go:89] found id: ""
	I0812 11:47:00.963434   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.963442   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:00.963447   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:00.963508   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:00.998331   57198 cri.go:89] found id: ""
	I0812 11:47:00.998366   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.998376   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:00.998385   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:00.998448   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:01.042696   57198 cri.go:89] found id: ""
	I0812 11:47:01.042729   57198 logs.go:276] 0 containers: []
	W0812 11:47:01.042738   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:01.042750   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:01.042764   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:01.134880   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:01.134918   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:01.171185   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:01.171223   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:01.222565   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:01.222608   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:01.236042   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:01.236076   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:01.309342   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:03.810121   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:03.822945   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:03.823023   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:03.856316   57198 cri.go:89] found id: ""
	I0812 11:47:03.856342   57198 logs.go:276] 0 containers: []
	W0812 11:47:03.856353   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:03.856361   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:03.856428   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:03.894579   57198 cri.go:89] found id: ""
	I0812 11:47:03.894610   57198 logs.go:276] 0 containers: []
	W0812 11:47:03.894622   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:03.894630   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:03.894680   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:03.929306   57198 cri.go:89] found id: ""
	I0812 11:47:03.929334   57198 logs.go:276] 0 containers: []
	W0812 11:47:03.929352   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:03.929359   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:03.929419   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:03.970739   57198 cri.go:89] found id: ""
	I0812 11:47:03.970774   57198 logs.go:276] 0 containers: []
	W0812 11:47:03.970786   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:03.970794   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:03.970872   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:04.004583   57198 cri.go:89] found id: ""
	I0812 11:47:04.004610   57198 logs.go:276] 0 containers: []
	W0812 11:47:04.004619   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:04.004630   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:04.004681   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:04.039259   57198 cri.go:89] found id: ""
	I0812 11:47:04.039288   57198 logs.go:276] 0 containers: []
	W0812 11:47:04.039298   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:04.039304   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:04.039372   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:04.072490   57198 cri.go:89] found id: ""
	I0812 11:47:04.072522   57198 logs.go:276] 0 containers: []
	W0812 11:47:04.072532   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:04.072547   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:04.072602   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:04.105648   57198 cri.go:89] found id: ""
	I0812 11:47:04.105677   57198 logs.go:276] 0 containers: []
	W0812 11:47:04.105686   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:04.105694   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:04.105705   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:04.181854   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:04.181880   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:04.181894   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:04.258499   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:04.258541   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:03.294934   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:05.295154   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:04.183393   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:06.682752   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:05.121108   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:04.296893   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:04.296918   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:04.347475   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:04.347514   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:06.862382   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:06.876230   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:06.876314   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:06.919447   57198 cri.go:89] found id: ""
	I0812 11:47:06.919487   57198 logs.go:276] 0 containers: []
	W0812 11:47:06.919499   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:06.919508   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:06.919581   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:06.954000   57198 cri.go:89] found id: ""
	I0812 11:47:06.954035   57198 logs.go:276] 0 containers: []
	W0812 11:47:06.954046   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:06.954055   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:06.954124   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:06.988225   57198 cri.go:89] found id: ""
	I0812 11:47:06.988256   57198 logs.go:276] 0 containers: []
	W0812 11:47:06.988266   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:06.988274   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:06.988347   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:07.024425   57198 cri.go:89] found id: ""
	I0812 11:47:07.024452   57198 logs.go:276] 0 containers: []
	W0812 11:47:07.024464   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:07.024471   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:07.024536   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:07.059758   57198 cri.go:89] found id: ""
	I0812 11:47:07.059785   57198 logs.go:276] 0 containers: []
	W0812 11:47:07.059793   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:07.059800   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:07.059859   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:07.093540   57198 cri.go:89] found id: ""
	I0812 11:47:07.093570   57198 logs.go:276] 0 containers: []
	W0812 11:47:07.093580   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:07.093587   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:07.093649   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:07.126880   57198 cri.go:89] found id: ""
	I0812 11:47:07.126910   57198 logs.go:276] 0 containers: []
	W0812 11:47:07.126920   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:07.126929   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:07.126984   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:07.159930   57198 cri.go:89] found id: ""
	I0812 11:47:07.159959   57198 logs.go:276] 0 containers: []
	W0812 11:47:07.159970   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:07.159980   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:07.159995   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:07.214022   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:07.214063   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:07.227009   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:07.227037   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:07.297583   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:07.297609   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:07.297629   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:07.377229   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:07.377281   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
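The block above is one full iteration of the probe that repeats below while this process (57198) waits for the v1.20.0 control plane: CRI-O is queried for each control-plane container by name and, when every query comes back empty, the kubelet, dmesg, CRI-O and container-status logs are gathered before the next attempt. The same probe can be reproduced by hand over minikube ssh; a minimal sketch only, with <profile> standing in for the profile name (not given in this excerpt):

	# probe for the control-plane containers exactly as the log above does
	minikube -p <profile> ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
	minikube -p <profile> ssh -- sudo crictl ps -a --quiet --name=etcd
	# empty output from both means the control plane never came up; pull the same logs minikube gathers
	minikube -p <profile> ssh -- sudo journalctl -u kubelet -n 400
	minikube -p <profile> ssh -- sudo journalctl -u crio -n 400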
	I0812 11:47:07.296302   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:09.296695   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:09.182760   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:11.682727   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:11.197110   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:09.914683   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:09.927943   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:09.928014   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:09.961729   57198 cri.go:89] found id: ""
	I0812 11:47:09.961757   57198 logs.go:276] 0 containers: []
	W0812 11:47:09.961768   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:09.961775   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:09.961835   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:09.998895   57198 cri.go:89] found id: ""
	I0812 11:47:09.998923   57198 logs.go:276] 0 containers: []
	W0812 11:47:09.998931   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:09.998936   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:09.998989   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:10.036414   57198 cri.go:89] found id: ""
	I0812 11:47:10.036447   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.036457   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:10.036465   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:10.036519   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:10.073783   57198 cri.go:89] found id: ""
	I0812 11:47:10.073811   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.073818   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:10.073824   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:10.073872   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:10.110532   57198 cri.go:89] found id: ""
	I0812 11:47:10.110566   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.110577   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:10.110584   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:10.110643   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:10.143728   57198 cri.go:89] found id: ""
	I0812 11:47:10.143768   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.143782   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:10.143791   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:10.143875   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:10.176706   57198 cri.go:89] found id: ""
	I0812 11:47:10.176740   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.176749   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:10.176754   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:10.176803   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:10.210409   57198 cri.go:89] found id: ""
	I0812 11:47:10.210439   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.210449   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:10.210460   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:10.210474   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:10.261338   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:10.261378   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:10.274313   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:10.274346   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:10.341830   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:10.341865   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:10.341881   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:10.417654   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:10.417699   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:12.954982   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:12.967755   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:12.967841   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:13.001425   57198 cri.go:89] found id: ""
	I0812 11:47:13.001452   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.001462   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:13.001470   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:13.001528   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:13.036527   57198 cri.go:89] found id: ""
	I0812 11:47:13.036561   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.036572   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:13.036579   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:13.036640   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:13.073271   57198 cri.go:89] found id: ""
	I0812 11:47:13.073301   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.073314   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:13.073323   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:13.073380   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:13.107512   57198 cri.go:89] found id: ""
	I0812 11:47:13.107543   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.107551   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:13.107557   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:13.107614   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:13.141938   57198 cri.go:89] found id: ""
	I0812 11:47:13.141972   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.141984   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:13.141991   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:13.142051   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:13.176628   57198 cri.go:89] found id: ""
	I0812 11:47:13.176660   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.176672   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:13.176679   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:13.176739   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:13.211620   57198 cri.go:89] found id: ""
	I0812 11:47:13.211649   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.211660   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:13.211667   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:13.211732   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:13.243877   57198 cri.go:89] found id: ""
	I0812 11:47:13.243902   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.243909   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:13.243917   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:13.243928   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:13.297684   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:13.297718   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:13.311287   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:13.311318   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:13.376488   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:13.376516   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:13.376531   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:13.457745   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:13.457786   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:11.795381   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:13.795932   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:14.183038   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:16.183071   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:14.273141   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:15.993556   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:16.006169   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:16.006249   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:16.040541   57198 cri.go:89] found id: ""
	I0812 11:47:16.040569   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.040578   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:16.040583   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:16.040633   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:16.073886   57198 cri.go:89] found id: ""
	I0812 11:47:16.073913   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.073924   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:16.073931   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:16.073993   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:16.107299   57198 cri.go:89] found id: ""
	I0812 11:47:16.107356   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.107369   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:16.107376   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:16.107431   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:16.142168   57198 cri.go:89] found id: ""
	I0812 11:47:16.142200   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.142209   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:16.142215   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:16.142262   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:16.175398   57198 cri.go:89] found id: ""
	I0812 11:47:16.175429   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.175440   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:16.175447   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:16.175509   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:16.210518   57198 cri.go:89] found id: ""
	I0812 11:47:16.210543   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.210551   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:16.210558   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:16.210614   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:16.244570   57198 cri.go:89] found id: ""
	I0812 11:47:16.244602   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.244611   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:16.244617   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:16.244683   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:16.278722   57198 cri.go:89] found id: ""
	I0812 11:47:16.278748   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.278756   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:16.278765   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:16.278777   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:16.322973   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:16.323010   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:16.374888   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:16.374936   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:16.388797   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:16.388827   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:16.462710   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:16.462731   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:16.462742   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:19.046529   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:19.061016   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:19.061083   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:19.098199   57198 cri.go:89] found id: ""
	I0812 11:47:19.098226   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.098238   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:19.098246   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:19.098307   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:19.131177   57198 cri.go:89] found id: ""
	I0812 11:47:19.131207   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.131215   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:19.131222   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:19.131281   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:19.164497   57198 cri.go:89] found id: ""
	I0812 11:47:19.164528   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.164539   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:19.164546   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:19.164619   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:19.200447   57198 cri.go:89] found id: ""
	I0812 11:47:19.200477   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.200485   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:19.200490   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:19.200553   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:19.235004   57198 cri.go:89] found id: ""
	I0812 11:47:19.235039   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.235051   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:19.235058   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:19.235114   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:16.297007   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:18.796402   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:18.186341   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:20.682850   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:22.683087   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:20.349117   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:23.421182   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:19.269669   57198 cri.go:89] found id: ""
	I0812 11:47:19.269700   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.269711   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:19.269719   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:19.269786   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:19.305486   57198 cri.go:89] found id: ""
	I0812 11:47:19.305515   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.305527   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:19.305533   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:19.305610   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:19.340701   57198 cri.go:89] found id: ""
	I0812 11:47:19.340730   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.340737   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:19.340745   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:19.340757   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:19.391595   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:19.391637   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:19.405702   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:19.405730   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:19.476972   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:19.477002   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:19.477017   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:19.560001   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:19.560037   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:22.100167   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:22.112650   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:22.112712   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:22.145625   57198 cri.go:89] found id: ""
	I0812 11:47:22.145651   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.145659   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:22.145665   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:22.145722   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:22.181353   57198 cri.go:89] found id: ""
	I0812 11:47:22.181388   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.181400   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:22.181407   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:22.181465   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:22.213563   57198 cri.go:89] found id: ""
	I0812 11:47:22.213592   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.213603   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:22.213610   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:22.213669   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:22.247589   57198 cri.go:89] found id: ""
	I0812 11:47:22.247614   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.247629   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:22.247635   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:22.247682   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:22.279102   57198 cri.go:89] found id: ""
	I0812 11:47:22.279126   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.279134   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:22.279139   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:22.279187   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:22.316174   57198 cri.go:89] found id: ""
	I0812 11:47:22.316204   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.316215   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:22.316222   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:22.316289   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:22.351875   57198 cri.go:89] found id: ""
	I0812 11:47:22.351904   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.351915   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:22.351920   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:22.351976   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:22.384224   57198 cri.go:89] found id: ""
	I0812 11:47:22.384260   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.384273   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:22.384283   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:22.384297   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:22.423032   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:22.423058   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:22.474127   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:22.474165   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:22.487638   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:22.487672   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:22.556554   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:22.556590   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:22.556607   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:21.295000   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:23.295712   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:25.296884   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:25.183687   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:27.683615   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:25.138357   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:25.152354   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:25.152438   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:25.187059   57198 cri.go:89] found id: ""
	I0812 11:47:25.187085   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.187095   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:25.187104   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:25.187164   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:25.220817   57198 cri.go:89] found id: ""
	I0812 11:47:25.220840   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.220848   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:25.220853   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:25.220911   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:25.256308   57198 cri.go:89] found id: ""
	I0812 11:47:25.256334   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.256342   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:25.256347   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:25.256394   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:25.290211   57198 cri.go:89] found id: ""
	I0812 11:47:25.290245   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.290254   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:25.290263   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:25.290334   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:25.324612   57198 cri.go:89] found id: ""
	I0812 11:47:25.324644   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.324651   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:25.324657   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:25.324708   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:25.362160   57198 cri.go:89] found id: ""
	I0812 11:47:25.362189   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.362200   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:25.362208   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:25.362271   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:25.396434   57198 cri.go:89] found id: ""
	I0812 11:47:25.396458   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.396466   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:25.396471   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:25.396531   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:25.429708   57198 cri.go:89] found id: ""
	I0812 11:47:25.429738   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.429750   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:25.429761   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:25.429775   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:25.443553   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:25.443588   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:25.515643   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:25.515684   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:25.515699   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:25.596323   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:25.596365   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:25.632444   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:25.632482   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:28.182092   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:28.195568   57198 kubeadm.go:597] duration metric: took 4m2.144668431s to restartPrimaryControlPlane
	W0812 11:47:28.195647   57198 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0812 11:47:28.195678   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0812 11:47:29.194896   57198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:47:29.210273   57198 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 11:47:29.220401   57198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 11:47:29.230765   57198 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 11:47:29.230783   57198 kubeadm.go:157] found existing configuration files:
	
	I0812 11:47:29.230825   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 11:47:29.240322   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 11:47:29.240392   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 11:47:29.251511   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 11:47:29.261616   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 11:47:29.261675   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 11:47:27.795828   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:29.796889   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:29.683959   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:32.183115   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:32.541112   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:29.273431   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 11:47:29.284262   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 11:47:29.284331   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 11:47:29.295811   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 11:47:29.306613   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 11:47:29.306685   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
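Immediately above, kubeadm reset has wiped /etc/kubernetes, so minikube's stale-config check fails for every kubeconfig: each file is grepped for the control-plane endpoint and removed when the grep fails. A rough sketch of that cleanup, run on the guest and equivalent to the four grep/rm pairs in the log:

	# stale-config cleanup as performed above: drop any kubeconfig not pointing at the expected endpoint
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done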
	I0812 11:47:29.317986   57198 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 11:47:29.566668   57198 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
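kubeadm's preflight only warns here: the kubelet systemd unit exists but is not enabled. The remedy the warning itself suggests can be applied on the guest; a sketch, again with <profile> as a placeholder:

	# enable the kubelet unit so it survives reboots, as the preflight warning suggests
	minikube -p <profile> ssh -- sudo systemctl enable kubelet.service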
	I0812 11:47:32.295992   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:34.795262   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:34.183370   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:36.682661   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:35.613159   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:36.796467   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:39.295851   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:39.182790   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:41.183829   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:41.693116   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:41.795257   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:43.795510   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:45.795595   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:43.681967   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:45.684043   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:44.765178   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:48.296050   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:50.796799   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:48.181748   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:50.182360   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:52.682975   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:50.845098   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:53.917138   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:53.299038   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:55.796462   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:55.183044   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:57.685262   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:58.295509   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:00.795668   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:00.182427   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:02.682842   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:59.997094   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:03.069083   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:03.296463   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:05.795306   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:05.182884   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:07.682408   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:07.796147   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:10.296184   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:10.182124   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:12.182757   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:09.149157   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:12.221135   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:12.296827   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:14.796551   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:14.682524   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:16.682657   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:18.301111   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:17.295545   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:19.295850   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:18.688121   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:21.182277   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:21.373181   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:21.297142   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:23.798497   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:23.182636   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:25.682702   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:27.682936   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:27.453111   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:26.295505   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:28.296105   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:30.796925   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:29.688759   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:32.182416   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:30.525184   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:33.295379   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:35.296605   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:34.183273   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:36.682829   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:36.605187   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:37.796023   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:38.789570   57616 pod_ready.go:81] duration metric: took 4m0.000355544s for pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace to be "Ready" ...
	E0812 11:48:38.789615   57616 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0812 11:48:38.789648   57616 pod_ready.go:38] duration metric: took 4m11.040926567s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:48:38.789687   57616 kubeadm.go:597] duration metric: took 4m21.131138259s to restartPrimaryControlPlane
	W0812 11:48:38.789757   57616 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0812 11:48:38.789794   57616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0812 11:48:38.683163   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:40.683334   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:39.677106   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:43.182845   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:44.677001   56845 pod_ready.go:81] duration metric: took 4m0.0007218s for pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace to be "Ready" ...
	E0812 11:48:44.677024   56845 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace to be "Ready" (will not retry!)
	I0812 11:48:44.677041   56845 pod_ready.go:38] duration metric: took 4m12.037310023s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:48:44.677065   56845 kubeadm.go:597] duration metric: took 4m19.591323336s to restartPrimaryControlPlane
	W0812 11:48:44.677114   56845 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0812 11:48:44.677137   56845 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
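Both restart attempts above exhaust their 4m0s WaitExtra budget on a metrics-server pod and then fall back to a full kubeadm reset. Expressed directly with kubectl, the wait that is timing out looks roughly like the following sketch (the k8s-app=metrics-server selector is an assumption based on the addon's usual labels; <context> is a placeholder):

	# roughly the readiness wait that expires after 4m0s in the log above
	kubectl --context <context> -n kube-system wait \
	  --for=condition=Ready pod -l k8s-app=metrics-server --timeout=4m0s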
	I0812 11:48:45.757157   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:48.829146   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:54.909142   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:57.981079   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
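Meanwhile process 59908 never gets past dialing 192.168.50.114:22: every attempt ends in "no route to host", so libmachine keeps retrying. The same reachability can be checked by hand; a sketch only (the "docker" guest user and the key path are assumptions about minikube's usual KVM machine layout):

	# is the guest's SSH port reachable at all?
	nc -vz -w 5 192.168.50.114 22
	# if it is, try the handshake libmachine keeps attempting
	ssh -o ConnectTimeout=5 -i ~/.minikube/machines/<profile>/id_rsa docker@192.168.50.114 true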
	I0812 11:49:04.870417   57616 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.080589185s)
	I0812 11:49:04.870490   57616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:49:04.897963   57616 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 11:49:04.912211   57616 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 11:49:04.933833   57616 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 11:49:04.933861   57616 kubeadm.go:157] found existing configuration files:
	
	I0812 11:49:04.933915   57616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 11:49:04.946673   57616 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 11:49:04.946756   57616 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 11:49:04.960851   57616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 11:49:04.989181   57616 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 11:49:04.989259   57616 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 11:49:05.002989   57616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 11:49:05.012600   57616 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 11:49:05.012673   57616 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 11:49:05.022301   57616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 11:49:05.031680   57616 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 11:49:05.031761   57616 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 11:49:05.041453   57616 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 11:49:05.087039   57616 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-rc.0
	I0812 11:49:05.087106   57616 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 11:49:05.195646   57616 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 11:49:05.195788   57616 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 11:49:05.195909   57616 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0812 11:49:05.204565   57616 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 11:49:05.207373   57616 out.go:204]   - Generating certificates and keys ...
	I0812 11:49:05.207481   57616 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 11:49:05.207573   57616 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 11:49:05.207696   57616 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0812 11:49:05.207792   57616 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0812 11:49:05.207896   57616 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0812 11:49:05.207995   57616 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0812 11:49:05.208103   57616 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0812 11:49:05.208195   57616 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0812 11:49:05.208296   57616 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0812 11:49:05.208401   57616 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0812 11:49:05.208456   57616 kubeadm.go:310] [certs] Using the existing "sa" key
	I0812 11:49:05.208531   57616 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 11:49:05.368644   57616 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 11:49:05.523403   57616 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0812 11:49:05.656177   57616 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 11:49:05.786141   57616 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 11:49:05.945607   57616 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 11:49:05.946201   57616 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 11:49:05.948940   57616 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 11:49:05.950857   57616 out.go:204]   - Booting up control plane ...
	I0812 11:49:05.950970   57616 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 11:49:05.951060   57616 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 11:49:05.952093   57616 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 11:49:05.971023   57616 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 11:49:05.978207   57616 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 11:49:05.978421   57616 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 11:49:06.109216   57616 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0812 11:49:06.109362   57616 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0812 11:49:04.061117   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:07.133143   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:07.110595   57616 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001459707s
	I0812 11:49:07.110732   57616 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0812 11:49:12.112776   57616 kubeadm.go:310] [api-check] The API server is healthy after 5.002008667s
	I0812 11:49:12.126637   57616 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0812 11:49:12.141115   57616 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0812 11:49:12.166337   57616 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0812 11:49:12.166727   57616 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-993542 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0812 11:49:12.180548   57616 kubeadm.go:310] [bootstrap-token] Using token: jiwh9x.y6rsv6xjvwdwkbct
	I0812 11:49:12.182174   57616 out.go:204]   - Configuring RBAC rules ...
	I0812 11:49:12.182276   57616 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0812 11:49:12.191053   57616 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0812 11:49:12.203294   57616 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0812 11:49:12.208858   57616 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0812 11:49:12.215501   57616 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0812 11:49:12.227747   57616 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0812 11:49:12.520136   57616 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0812 11:49:12.964503   57616 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0812 11:49:13.523969   57616 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0812 11:49:13.524831   57616 kubeadm.go:310] 
	I0812 11:49:13.524954   57616 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0812 11:49:13.524973   57616 kubeadm.go:310] 
	I0812 11:49:13.525098   57616 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0812 11:49:13.525113   57616 kubeadm.go:310] 
	I0812 11:49:13.525147   57616 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0812 11:49:13.525220   57616 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0812 11:49:13.525311   57616 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0812 11:49:13.525325   57616 kubeadm.go:310] 
	I0812 11:49:13.525411   57616 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0812 11:49:13.525420   57616 kubeadm.go:310] 
	I0812 11:49:13.525489   57616 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0812 11:49:13.525503   57616 kubeadm.go:310] 
	I0812 11:49:13.525572   57616 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0812 11:49:13.525690   57616 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0812 11:49:13.525780   57616 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0812 11:49:13.525790   57616 kubeadm.go:310] 
	I0812 11:49:13.525905   57616 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0812 11:49:13.526000   57616 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0812 11:49:13.526011   57616 kubeadm.go:310] 
	I0812 11:49:13.526119   57616 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jiwh9x.y6rsv6xjvwdwkbct \
	I0812 11:49:13.526271   57616 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc \
	I0812 11:49:13.526307   57616 kubeadm.go:310] 	--control-plane 
	I0812 11:49:13.526317   57616 kubeadm.go:310] 
	I0812 11:49:13.526420   57616 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0812 11:49:13.526429   57616 kubeadm.go:310] 
	I0812 11:49:13.526527   57616 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jiwh9x.y6rsv6xjvwdwkbct \
	I0812 11:49:13.526653   57616 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc 
	I0812 11:49:13.527630   57616 kubeadm.go:310] W0812 11:49:05.056260    3066 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0812 11:49:13.528000   57616 kubeadm.go:310] W0812 11:49:05.058135    3066 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0812 11:49:13.528149   57616 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 11:49:13.528175   57616 cni.go:84] Creating CNI manager for ""
	I0812 11:49:13.528189   57616 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:49:13.529938   57616 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0812 11:49:13.213137   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:13.531443   57616 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0812 11:49:13.542933   57616 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
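The 496-byte conflist written above is not echoed into the log; a minimal bridge CNI configuration of the kind minikube generates for a crio-backed node might look roughly like the sketch below (field values are assumptions, not the actual file contents):

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF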
	I0812 11:49:13.562053   57616 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 11:49:13.562181   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:13.562196   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-993542 minikube.k8s.io/updated_at=2024_08_12T11_49_13_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7 minikube.k8s.io/name=no-preload-993542 minikube.k8s.io/primary=true
	I0812 11:49:13.764006   57616 ops.go:34] apiserver oom_adj: -16
	I0812 11:49:13.764145   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:14.264728   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:14.764225   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:15.264599   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:15.764919   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:15.943701   56845 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.266539018s)
	I0812 11:49:15.943778   56845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:49:15.959746   56845 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 11:49:15.970630   56845 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 11:49:15.980712   56845 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 11:49:15.980729   56845 kubeadm.go:157] found existing configuration files:
	
	I0812 11:49:15.980775   56845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 11:49:15.990070   56845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 11:49:15.990133   56845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 11:49:15.999602   56845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 11:49:16.008767   56845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 11:49:16.008855   56845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 11:49:16.019564   56845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 11:49:16.028585   56845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 11:49:16.028660   56845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 11:49:16.037916   56845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 11:49:16.047028   56845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 11:49:16.047087   56845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 11:49:16.056780   56845 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 11:49:16.104764   56845 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0812 11:49:16.104848   56845 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 11:49:16.239085   56845 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 11:49:16.239218   56845 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 11:49:16.239309   56845 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0812 11:49:16.456581   56845 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 11:49:16.458619   56845 out.go:204]   - Generating certificates and keys ...
	I0812 11:49:16.458731   56845 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 11:49:16.458805   56845 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 11:49:16.458927   56845 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0812 11:49:16.459037   56845 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0812 11:49:16.459121   56845 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0812 11:49:16.459191   56845 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0812 11:49:16.459281   56845 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0812 11:49:16.459385   56845 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0812 11:49:16.459469   56845 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0812 11:49:16.459569   56845 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0812 11:49:16.459643   56845 kubeadm.go:310] [certs] Using the existing "sa" key
	I0812 11:49:16.459734   56845 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 11:49:16.579477   56845 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 11:49:16.765880   56845 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0812 11:49:16.885469   56845 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 11:49:16.955885   56845 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 11:49:17.091576   56845 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 11:49:17.092005   56845 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 11:49:17.094454   56845 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 11:49:17.096720   56845 out.go:204]   - Booting up control plane ...
	I0812 11:49:17.096850   56845 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 11:49:17.096976   56845 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 11:49:17.098357   56845 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 11:49:17.115656   56845 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 11:49:17.116069   56845 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 11:49:17.116128   56845 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 11:49:17.256475   56845 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0812 11:49:17.256550   56845 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0812 11:49:17.758741   56845 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.271569ms
	I0812 11:49:17.758818   56845 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0812 11:49:16.264606   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:16.764905   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:17.264989   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:17.765205   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:18.265008   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:18.380060   57616 kubeadm.go:1113] duration metric: took 4.817945872s to wait for elevateKubeSystemPrivileges
	I0812 11:49:18.380107   57616 kubeadm.go:394] duration metric: took 5m0.782175026s to StartCluster
	I0812 11:49:18.380131   57616 settings.go:142] acquiring lock: {Name:mk4060151bda1f131d6abfbbd19b6a8ab3b5e774 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:49:18.380237   57616 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 11:49:18.382942   57616 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/kubeconfig: {Name:mke68f1d372d8e5baa69199b426efaec54860499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:49:18.383329   57616 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.148 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 11:49:18.383406   57616 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
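For reference, the same addons could be toggled on an existing profile from the host with the minikube CLI (a sketch; the test harness drives this through minikube's Go API rather than the CLI):

	minikube -p no-preload-993542 addons enable storage-provisioner
	minikube -p no-preload-993542 addons enable metrics-server
	minikube -p no-preload-993542 addons list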
	I0812 11:49:18.383564   57616 addons.go:69] Setting storage-provisioner=true in profile "no-preload-993542"
	I0812 11:49:18.383573   57616 addons.go:69] Setting default-storageclass=true in profile "no-preload-993542"
	I0812 11:49:18.383603   57616 addons.go:234] Setting addon storage-provisioner=true in "no-preload-993542"
	W0812 11:49:18.383618   57616 addons.go:243] addon storage-provisioner should already be in state true
	I0812 11:49:18.383620   57616 config.go:182] Loaded profile config "no-preload-993542": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0812 11:49:18.383634   57616 addons.go:69] Setting metrics-server=true in profile "no-preload-993542"
	I0812 11:49:18.383653   57616 host.go:66] Checking if "no-preload-993542" exists ...
	I0812 11:49:18.383621   57616 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-993542"
	I0812 11:49:18.383662   57616 addons.go:234] Setting addon metrics-server=true in "no-preload-993542"
	W0812 11:49:18.383674   57616 addons.go:243] addon metrics-server should already be in state true
	I0812 11:49:18.383708   57616 host.go:66] Checking if "no-preload-993542" exists ...
	I0812 11:49:18.384042   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.384072   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.384089   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.384117   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.384181   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.384211   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.386531   57616 out.go:177] * Verifying Kubernetes components...
	I0812 11:49:18.388412   57616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:49:18.404269   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44399
	I0812 11:49:18.404302   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36427
	I0812 11:49:18.404279   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43565
	I0812 11:49:18.405011   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.405062   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.405012   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.405601   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.405603   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.405621   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.405636   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.405743   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.405769   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.406150   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.406174   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.406184   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.406762   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.406786   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.407101   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetState
	I0812 11:49:18.407395   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.407420   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.411782   57616 addons.go:234] Setting addon default-storageclass=true in "no-preload-993542"
	W0812 11:49:18.411813   57616 addons.go:243] addon default-storageclass should already be in state true
	I0812 11:49:18.411843   57616 host.go:66] Checking if "no-preload-993542" exists ...
	I0812 11:49:18.412202   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.412241   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.428999   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41401
	I0812 11:49:18.429469   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.430064   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.430087   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.430147   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45407
	I0812 11:49:18.430442   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.430500   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.430762   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetState
	I0812 11:49:18.431525   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.431539   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.431950   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.432152   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetState
	I0812 11:49:18.432474   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46405
	I0812 11:49:18.432876   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.433599   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.433618   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.433872   57616 main.go:141] libmachine: (no-preload-993542) Calling .DriverName
	I0812 11:49:18.434119   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.434381   57616 main.go:141] libmachine: (no-preload-993542) Calling .DriverName
	I0812 11:49:18.434819   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.434875   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.436590   57616 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 11:49:18.436703   57616 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0812 11:49:16.285160   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:18.438442   57616 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 11:49:18.438466   57616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 11:49:18.438489   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHHostname
	I0812 11:49:18.438698   57616 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0812 11:49:18.438713   57616 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0812 11:49:18.438731   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHHostname
	I0812 11:49:18.443927   57616 main.go:141] libmachine: (no-preload-993542) DBG | domain no-preload-993542 has defined MAC address 52:54:00:bc:11:b5 in network mk-no-preload-993542
	I0812 11:49:18.443965   57616 main.go:141] libmachine: (no-preload-993542) DBG | domain no-preload-993542 has defined MAC address 52:54:00:bc:11:b5 in network mk-no-preload-993542
	I0812 11:49:18.444276   57616 main.go:141] libmachine: (no-preload-993542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:11:b5", ip: ""} in network mk-no-preload-993542: {Iface:virbr1 ExpiryTime:2024-08-12 12:43:50 +0000 UTC Type:0 Mac:52:54:00:bc:11:b5 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:no-preload-993542 Clientid:01:52:54:00:bc:11:b5}
	I0812 11:49:18.444315   57616 main.go:141] libmachine: (no-preload-993542) DBG | domain no-preload-993542 has defined IP address 192.168.61.148 and MAC address 52:54:00:bc:11:b5 in network mk-no-preload-993542
	I0812 11:49:18.444373   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHPort
	I0812 11:49:18.444614   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHKeyPath
	I0812 11:49:18.444790   57616 main.go:141] libmachine: (no-preload-993542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:11:b5", ip: ""} in network mk-no-preload-993542: {Iface:virbr1 ExpiryTime:2024-08-12 12:43:50 +0000 UTC Type:0 Mac:52:54:00:bc:11:b5 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:no-preload-993542 Clientid:01:52:54:00:bc:11:b5}
	I0812 11:49:18.444824   57616 main.go:141] libmachine: (no-preload-993542) DBG | domain no-preload-993542 has defined IP address 192.168.61.148 and MAC address 52:54:00:bc:11:b5 in network mk-no-preload-993542
	I0812 11:49:18.444851   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHUsername
	I0812 11:49:18.445055   57616 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/no-preload-993542/id_rsa Username:docker}
	I0812 11:49:18.445427   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHPort
	I0812 11:49:18.445624   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHKeyPath
	I0812 11:49:18.445776   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHUsername
	I0812 11:49:18.445938   57616 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/no-preload-993542/id_rsa Username:docker}
	I0812 11:49:18.457462   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I0812 11:49:18.457995   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.458573   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.458602   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.459048   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.459315   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetState
	I0812 11:49:18.461486   57616 main.go:141] libmachine: (no-preload-993542) Calling .DriverName
	I0812 11:49:18.461753   57616 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 11:49:18.461770   57616 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 11:49:18.461788   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHHostname
	I0812 11:49:18.465243   57616 main.go:141] libmachine: (no-preload-993542) DBG | domain no-preload-993542 has defined MAC address 52:54:00:bc:11:b5 in network mk-no-preload-993542
	I0812 11:49:18.465776   57616 main.go:141] libmachine: (no-preload-993542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:11:b5", ip: ""} in network mk-no-preload-993542: {Iface:virbr1 ExpiryTime:2024-08-12 12:43:50 +0000 UTC Type:0 Mac:52:54:00:bc:11:b5 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:no-preload-993542 Clientid:01:52:54:00:bc:11:b5}
	I0812 11:49:18.465803   57616 main.go:141] libmachine: (no-preload-993542) DBG | domain no-preload-993542 has defined IP address 192.168.61.148 and MAC address 52:54:00:bc:11:b5 in network mk-no-preload-993542
	I0812 11:49:18.465981   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHPort
	I0812 11:49:18.466172   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHKeyPath
	I0812 11:49:18.466325   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHUsername
	I0812 11:49:18.466478   57616 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/no-preload-993542/id_rsa Username:docker}
	I0812 11:49:18.649285   57616 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 11:49:18.666240   57616 node_ready.go:35] waiting up to 6m0s for node "no-preload-993542" to be "Ready" ...
	I0812 11:49:18.675741   57616 node_ready.go:49] node "no-preload-993542" has status "Ready":"True"
	I0812 11:49:18.675769   57616 node_ready.go:38] duration metric: took 9.489483ms for node "no-preload-993542" to be "Ready" ...
	I0812 11:49:18.675781   57616 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:49:18.687934   57616 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-2gc2z" in "kube-system" namespace to be "Ready" ...
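The node and pod readiness polling above runs through minikube's internal client; roughly equivalent checks from the host, assuming the kubeconfig context is named after the profile, would be:

	kubectl --context no-preload-993542 wait --for=condition=Ready node/no-preload-993542 --timeout=360s
	kubectl --context no-preload-993542 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=360s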
	I0812 11:49:18.762652   57616 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 11:49:18.769504   57616 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0812 11:49:18.769533   57616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0812 11:49:18.801182   57616 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 11:49:18.815215   57616 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0812 11:49:18.815249   57616 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0812 11:49:18.869830   57616 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 11:49:18.869856   57616 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0812 11:49:18.943609   57616 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 11:49:19.326108   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.326145   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.326183   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.326200   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.326517   57616 main.go:141] libmachine: (no-preload-993542) DBG | Closing plugin on server side
	I0812 11:49:19.326543   57616 main.go:141] libmachine: (no-preload-993542) DBG | Closing plugin on server side
	I0812 11:49:19.326558   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.326571   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.326577   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.326580   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.326586   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.326588   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.326597   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.326598   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.326969   57616 main.go:141] libmachine: (no-preload-993542) DBG | Closing plugin on server side
	I0812 11:49:19.326997   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.327005   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.327232   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.327247   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.349315   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.349341   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.349693   57616 main.go:141] libmachine: (no-preload-993542) DBG | Closing plugin on server side
	I0812 11:49:19.349737   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.349746   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.620732   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.620765   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.621097   57616 main.go:141] libmachine: (no-preload-993542) DBG | Closing plugin on server side
	I0812 11:49:19.621143   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.621160   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.621170   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.621182   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.621446   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.621469   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.621481   57616 addons.go:475] Verifying addon metrics-server=true in "no-preload-993542"
	I0812 11:49:19.624757   57616 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0812 11:49:19.626510   57616 addons.go:510] duration metric: took 1.243102289s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
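Once the addons report enabled, a quick manual verification of metrics-server and the default storage class would look something like this (deployment and context names assumed from the manifests and profile above):

	kubectl --context no-preload-993542 -n kube-system rollout status deployment/metrics-server --timeout=300s
	kubectl --context no-preload-993542 get storageclass
	kubectl --context no-preload-993542 top nodes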
	I0812 11:49:20.695552   57616 pod_ready.go:102] pod "coredns-6f6b679f8f-2gc2z" in "kube-system" namespace has status "Ready":"False"
	I0812 11:49:22.762626   56845 kubeadm.go:310] [api-check] The API server is healthy after 5.002108915s
	I0812 11:49:22.782365   56845 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0812 11:49:22.794869   56845 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0812 11:49:22.829058   56845 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0812 11:49:22.829314   56845 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-093615 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0812 11:49:22.842722   56845 kubeadm.go:310] [bootstrap-token] Using token: e42mo3.61s6ofjvy51u5vh7
	I0812 11:49:22.844590   56845 out.go:204]   - Configuring RBAC rules ...
	I0812 11:49:22.844745   56845 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0812 11:49:22.851804   56845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0812 11:49:22.861419   56845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0812 11:49:22.866597   56845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0812 11:49:22.870810   56845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0812 11:49:22.886117   56845 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0812 11:49:22.365060   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:23.168156   56845 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0812 11:49:23.612002   56845 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0812 11:49:24.170270   56845 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0812 11:49:24.171014   56845 kubeadm.go:310] 
	I0812 11:49:24.171076   56845 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0812 11:49:24.171084   56845 kubeadm.go:310] 
	I0812 11:49:24.171146   56845 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0812 11:49:24.171153   56845 kubeadm.go:310] 
	I0812 11:49:24.171204   56845 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0812 11:49:24.171801   56845 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0812 11:49:24.171846   56845 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0812 11:49:24.171853   56845 kubeadm.go:310] 
	I0812 11:49:24.171954   56845 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0812 11:49:24.171975   56845 kubeadm.go:310] 
	I0812 11:49:24.172039   56845 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0812 11:49:24.172051   56845 kubeadm.go:310] 
	I0812 11:49:24.172125   56845 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0812 11:49:24.172247   56845 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0812 11:49:24.172360   56845 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0812 11:49:24.172378   56845 kubeadm.go:310] 
	I0812 11:49:24.172498   56845 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0812 11:49:24.172601   56845 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0812 11:49:24.172611   56845 kubeadm.go:310] 
	I0812 11:49:24.172772   56845 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token e42mo3.61s6ofjvy51u5vh7 \
	I0812 11:49:24.172908   56845 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc \
	I0812 11:49:24.172944   56845 kubeadm.go:310] 	--control-plane 
	I0812 11:49:24.172953   56845 kubeadm.go:310] 
	I0812 11:49:24.173063   56845 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0812 11:49:24.173073   56845 kubeadm.go:310] 
	I0812 11:49:24.173209   56845 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token e42mo3.61s6ofjvy51u5vh7 \
	I0812 11:49:24.173363   56845 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc 
	I0812 11:49:24.173919   56845 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 11:49:24.173990   56845 cni.go:84] Creating CNI manager for ""
	I0812 11:49:24.174008   56845 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:49:24.176549   56845 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0812 11:49:25.662550   57198 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0812 11:49:25.662668   57198 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0812 11:49:25.664487   57198 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0812 11:49:25.664563   57198 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 11:49:25.664640   57198 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 11:49:25.664729   57198 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 11:49:25.664809   57198 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0812 11:49:25.664949   57198 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 11:49:25.666793   57198 out.go:204]   - Generating certificates and keys ...
	I0812 11:49:25.666861   57198 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 11:49:25.666925   57198 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 11:49:25.667017   57198 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0812 11:49:25.667091   57198 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0812 11:49:25.667181   57198 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0812 11:49:25.667232   57198 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0812 11:49:25.667306   57198 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0812 11:49:25.667359   57198 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0812 11:49:25.667437   57198 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0812 11:49:25.667536   57198 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0812 11:49:25.667592   57198 kubeadm.go:310] [certs] Using the existing "sa" key
	I0812 11:49:25.667680   57198 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 11:49:25.667754   57198 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 11:49:25.667839   57198 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 11:49:25.667950   57198 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 11:49:25.668040   57198 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 11:49:25.668189   57198 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 11:49:25.668289   57198 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 11:49:25.668333   57198 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 11:49:25.668400   57198 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 11:49:22.696279   57616 pod_ready.go:102] pod "coredns-6f6b679f8f-2gc2z" in "kube-system" namespace has status "Ready":"False"
	I0812 11:49:25.194695   57616 pod_ready.go:102] pod "coredns-6f6b679f8f-2gc2z" in "kube-system" namespace has status "Ready":"False"
	I0812 11:49:25.695175   57616 pod_ready.go:92] pod "coredns-6f6b679f8f-2gc2z" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:25.695199   57616 pod_ready.go:81] duration metric: took 7.007233179s for pod "coredns-6f6b679f8f-2gc2z" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:25.695209   57616 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-shfmr" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:25.670765   57198 out.go:204]   - Booting up control plane ...
	I0812 11:49:25.670861   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 11:49:25.670939   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 11:49:25.671039   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 11:49:25.671150   57198 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 11:49:25.671295   57198 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0812 11:49:25.671379   57198 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0812 11:49:25.671476   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:49:25.671647   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:49:25.671705   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:49:25.671862   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:49:25.671919   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:49:25.672079   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:49:25.672136   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:49:25.672288   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:49:25.672347   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:49:25.672558   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:49:25.672576   57198 kubeadm.go:310] 
	I0812 11:49:25.672636   57198 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0812 11:49:25.672686   57198 kubeadm.go:310] 		timed out waiting for the condition
	I0812 11:49:25.672701   57198 kubeadm.go:310] 
	I0812 11:49:25.672757   57198 kubeadm.go:310] 	This error is likely caused by:
	I0812 11:49:25.672811   57198 kubeadm.go:310] 		- The kubelet is not running
	I0812 11:49:25.672932   57198 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0812 11:49:25.672941   57198 kubeadm.go:310] 
	I0812 11:49:25.673048   57198 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0812 11:49:25.673091   57198 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0812 11:49:25.673133   57198 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0812 11:49:25.673141   57198 kubeadm.go:310] 
	I0812 11:49:25.673242   57198 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0812 11:49:25.673343   57198 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0812 11:49:25.673353   57198 kubeadm.go:310] 
	I0812 11:49:25.673513   57198 kubeadm.go:310] 	Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0812 11:49:25.673593   57198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0812 11:49:25.673660   57198 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0812 11:49:25.673724   57198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0812 11:49:25.673768   57198 kubeadm.go:310] 
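The v1.20.0 control plane never came up here; following the suggestions embedded in the error itself, a troubleshooting pass on the node would look roughly like this (CONTAINERID is a placeholder for whichever container the listing shows as failing):

	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet --no-pager | tail -n 100
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID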
	W0812 11:49:25.673837   57198 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
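
The [kubelet-check] lines above are repeatedly probing the kubelet's local health endpoint and getting "connection refused", which is why the init phase eventually times out. As a minimal sketch (not kubeadm's or minikube's code), the same probe can be reproduced by hand; the port 10248 and the /healthz path are taken from the log, everything else is illustrative.

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Reproduce the [kubelet-check] probe: GET the kubelet's local healthz endpoint.
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		// On the failing node above this is where the
		// "dial tcp 127.0.0.1:10248: connect: connection refused" error shows up.
		fmt.Println("kubelet healthz probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("kubelet healthz returned:", resp.Status)
}
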
	
	I0812 11:49:25.673882   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0812 11:49:26.145437   57198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:49:26.160316   57198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 11:49:26.169638   57198 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 11:49:26.169664   57198 kubeadm.go:157] found existing configuration files:
	
	I0812 11:49:26.169711   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 11:49:26.179210   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 11:49:26.179278   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 11:49:26.189165   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 11:49:26.198952   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 11:49:26.199019   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 11:49:26.208905   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 11:49:26.217947   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 11:49:26.218003   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 11:49:26.227048   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 11:49:26.235890   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 11:49:26.235946   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
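
The grep/rm sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise it is removed so the retried kubeadm init can regenerate it. A rough sketch of that pattern follows; the file names and endpoint are copied from the log, the code itself is assumed and is not minikube's implementation.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or stale: delete it, ignoring the error if it is already gone
			// (mirrors the `sudo rm -f` calls in the log).
			_ = os.Remove(f)
			fmt.Println("removed missing/stale config:", f)
			continue
		}
		fmt.Println("keeping up-to-date config:", f)
	}
}
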
	I0812 11:49:26.245085   57198 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 11:49:26.313657   57198 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0812 11:49:26.313809   57198 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 11:49:26.463967   57198 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 11:49:26.464098   57198 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 11:49:26.464204   57198 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0812 11:49:26.650503   57198 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 11:49:26.652540   57198 out.go:204]   - Generating certificates and keys ...
	I0812 11:49:26.652631   57198 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 11:49:26.652686   57198 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 11:49:26.652751   57198 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0812 11:49:26.652803   57198 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0812 11:49:26.652913   57198 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0812 11:49:26.652983   57198 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0812 11:49:26.653052   57198 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0812 11:49:26.653157   57198 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0812 11:49:26.653299   57198 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0812 11:49:26.653430   57198 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0812 11:49:26.653489   57198 kubeadm.go:310] [certs] Using the existing "sa" key
	I0812 11:49:26.653569   57198 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 11:49:26.881003   57198 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 11:49:26.962055   57198 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 11:49:27.166060   57198 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 11:49:27.340900   57198 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 11:49:27.359946   57198 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 11:49:27.362022   57198 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 11:49:27.362302   57198 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 11:49:27.515254   57198 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 11:49:24.177809   56845 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0812 11:49:24.188175   56845 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0812 11:49:24.208060   56845 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 11:49:24.208152   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:24.208209   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-093615 minikube.k8s.io/updated_at=2024_08_12T11_49_24_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7 minikube.k8s.io/name=embed-certs-093615 minikube.k8s.io/primary=true
	I0812 11:49:24.393211   56845 ops.go:34] apiserver oom_adj: -16
	I0812 11:49:24.393296   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:24.894092   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:25.394229   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:25.893667   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:26.394057   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:26.893509   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:27.394296   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:27.893453   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:25.441104   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:27.517314   57198 out.go:204]   - Booting up control plane ...
	I0812 11:49:27.517444   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 11:49:27.523528   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 11:49:27.524732   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 11:49:27.525723   57198 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 11:49:27.527868   57198 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0812 11:49:27.702461   57616 pod_ready.go:102] pod "coredns-6f6b679f8f-shfmr" in "kube-system" namespace has status "Ready":"False"
	I0812 11:49:28.202582   57616 pod_ready.go:92] pod "coredns-6f6b679f8f-shfmr" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:28.202608   57616 pod_ready.go:81] duration metric: took 2.507391262s for pod "coredns-6f6b679f8f-shfmr" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.202621   57616 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.207529   57616 pod_ready.go:92] pod "etcd-no-preload-993542" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:28.207551   57616 pod_ready.go:81] duration metric: took 4.923206ms for pod "etcd-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.207560   57616 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.212760   57616 pod_ready.go:92] pod "kube-apiserver-no-preload-993542" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:28.212794   57616 pod_ready.go:81] duration metric: took 5.223592ms for pod "kube-apiserver-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.212807   57616 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.216970   57616 pod_ready.go:92] pod "kube-controller-manager-no-preload-993542" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:28.216993   57616 pod_ready.go:81] duration metric: took 4.177186ms for pod "kube-controller-manager-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.217004   57616 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8jwkz" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.221078   57616 pod_ready.go:92] pod "kube-proxy-8jwkz" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:28.221096   57616 pod_ready.go:81] duration metric: took 4.085629ms for pod "kube-proxy-8jwkz" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.221105   57616 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.600004   57616 pod_ready.go:92] pod "kube-scheduler-no-preload-993542" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:28.600031   57616 pod_ready.go:81] duration metric: took 378.92044ms for pod "kube-scheduler-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.600039   57616 pod_ready.go:38] duration metric: took 9.924247425s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:49:28.600053   57616 api_server.go:52] waiting for apiserver process to appear ...
	I0812 11:49:28.600102   57616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:49:28.615007   57616 api_server.go:72] duration metric: took 10.231634381s to wait for apiserver process to appear ...
	I0812 11:49:28.615043   57616 api_server.go:88] waiting for apiserver healthz status ...
	I0812 11:49:28.615063   57616 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8443/healthz ...
	I0812 11:49:28.620301   57616 api_server.go:279] https://192.168.61.148:8443/healthz returned 200:
	ok
	I0812 11:49:28.621814   57616 api_server.go:141] control plane version: v1.31.0-rc.0
	I0812 11:49:28.621843   57616 api_server.go:131] duration metric: took 6.792657ms to wait for apiserver health ...
	I0812 11:49:28.621858   57616 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 11:49:28.804172   57616 system_pods.go:59] 9 kube-system pods found
	I0812 11:49:28.804204   57616 system_pods.go:61] "coredns-6f6b679f8f-2gc2z" [4d5375c0-6f19-40b7-98bc-50d4ef45fd93] Running
	I0812 11:49:28.804208   57616 system_pods.go:61] "coredns-6f6b679f8f-shfmr" [6fd90de8-af9e-4b43-9fa7-b503a00e9845] Running
	I0812 11:49:28.804213   57616 system_pods.go:61] "etcd-no-preload-993542" [c3144e52-830b-47f1-913d-e44880368ee4] Running
	I0812 11:49:28.804216   57616 system_pods.go:61] "kube-apiserver-no-preload-993542" [73061d9a-d3cd-421a-bbd5-7bfe221d8729] Running
	I0812 11:49:28.804219   57616 system_pods.go:61] "kube-controller-manager-no-preload-993542" [0999e6c2-30b8-4d53-9420-6a00757eb9d4] Running
	I0812 11:49:28.804224   57616 system_pods.go:61] "kube-proxy-8jwkz" [43501e17-fde3-4468-a170-e64a58088ec2] Running
	I0812 11:49:28.804227   57616 system_pods.go:61] "kube-scheduler-no-preload-993542" [edaa4d82-7994-4052-ba5b-5729c543c006] Running
	I0812 11:49:28.804232   57616 system_pods.go:61] "metrics-server-6867b74b74-25zg8" [70d17780-d4bc-4df4-93ac-bb74c1fa50f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 11:49:28.804236   57616 system_pods.go:61] "storage-provisioner" [beb7a321-e575-44e5-8d10-3749d1285806] Running
	I0812 11:49:28.804244   57616 system_pods.go:74] duration metric: took 182.379622ms to wait for pod list to return data ...
	I0812 11:49:28.804251   57616 default_sa.go:34] waiting for default service account to be created ...
	I0812 11:49:28.999537   57616 default_sa.go:45] found service account: "default"
	I0812 11:49:28.999571   57616 default_sa.go:55] duration metric: took 195.31354ms for default service account to be created ...
	I0812 11:49:28.999582   57616 system_pods.go:116] waiting for k8s-apps to be running ...
	I0812 11:49:29.205266   57616 system_pods.go:86] 9 kube-system pods found
	I0812 11:49:29.205296   57616 system_pods.go:89] "coredns-6f6b679f8f-2gc2z" [4d5375c0-6f19-40b7-98bc-50d4ef45fd93] Running
	I0812 11:49:29.205301   57616 system_pods.go:89] "coredns-6f6b679f8f-shfmr" [6fd90de8-af9e-4b43-9fa7-b503a00e9845] Running
	I0812 11:49:29.205306   57616 system_pods.go:89] "etcd-no-preload-993542" [c3144e52-830b-47f1-913d-e44880368ee4] Running
	I0812 11:49:29.205310   57616 system_pods.go:89] "kube-apiserver-no-preload-993542" [73061d9a-d3cd-421a-bbd5-7bfe221d8729] Running
	I0812 11:49:29.205315   57616 system_pods.go:89] "kube-controller-manager-no-preload-993542" [0999e6c2-30b8-4d53-9420-6a00757eb9d4] Running
	I0812 11:49:29.205319   57616 system_pods.go:89] "kube-proxy-8jwkz" [43501e17-fde3-4468-a170-e64a58088ec2] Running
	I0812 11:49:29.205323   57616 system_pods.go:89] "kube-scheduler-no-preload-993542" [edaa4d82-7994-4052-ba5b-5729c543c006] Running
	I0812 11:49:29.205329   57616 system_pods.go:89] "metrics-server-6867b74b74-25zg8" [70d17780-d4bc-4df4-93ac-bb74c1fa50f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 11:49:29.205335   57616 system_pods.go:89] "storage-provisioner" [beb7a321-e575-44e5-8d10-3749d1285806] Running
	I0812 11:49:29.205342   57616 system_pods.go:126] duration metric: took 205.754437ms to wait for k8s-apps to be running ...
	I0812 11:49:29.205348   57616 system_svc.go:44] waiting for kubelet service to be running ....
	I0812 11:49:29.205390   57616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:49:29.220297   57616 system_svc.go:56] duration metric: took 14.940181ms WaitForService to wait for kubelet
	I0812 11:49:29.220343   57616 kubeadm.go:582] duration metric: took 10.836962086s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 11:49:29.220369   57616 node_conditions.go:102] verifying NodePressure condition ...
	I0812 11:49:29.400598   57616 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 11:49:29.400634   57616 node_conditions.go:123] node cpu capacity is 2
	I0812 11:49:29.400648   57616 node_conditions.go:105] duration metric: took 180.272764ms to run NodePressure ...
	I0812 11:49:29.400663   57616 start.go:241] waiting for startup goroutines ...
	I0812 11:49:29.400675   57616 start.go:246] waiting for cluster config update ...
	I0812 11:49:29.400691   57616 start.go:255] writing updated cluster config ...
	I0812 11:49:29.401086   57616 ssh_runner.go:195] Run: rm -f paused
	I0812 11:49:29.454975   57616 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-rc.0 (minor skew: 1)
	I0812 11:49:29.457349   57616 out.go:177] * Done! kubectl is now configured to use "no-preload-993542" cluster and "default" namespace by default
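
The pod_ready.go lines above poll each system-critical pod until its Ready condition turns True or the 6m0s deadline expires. A minimal client-go sketch of that style of wait is shown below; it assumes a kubeconfig at the path used throughout the log and uses the etcd pod name from this run purely as an example — it is not minikube's pod_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location, taken from the log above.
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := clientset.CoreV1().Pods("kube-system").Get(
			context.TODO(), "etcd-no-preload-993542", metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				// Same condition the log reports as `has status "Ready":"True"`.
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
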
	I0812 11:49:28.394104   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:28.894284   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:29.393380   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:29.893417   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:30.394034   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:30.893668   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:31.394322   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:31.894069   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:32.393691   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:32.893944   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:31.517192   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:33.393880   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:33.894126   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:34.393857   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:34.893356   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:35.394181   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:35.894116   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:36.393690   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:36.893650   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:37.394325   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:37.524187   56845 kubeadm.go:1113] duration metric: took 13.316085022s to wait for elevateKubeSystemPrivileges
	I0812 11:49:37.524225   56845 kubeadm.go:394] duration metric: took 5m12.500523071s to StartCluster
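
The elevateKubeSystemPrivileges step above boils down to creating a "minikube-rbac" ClusterRoleBinding that grants cluster-admin to the kube-system default service account, then polling `kubectl get sa default` until the default service account exists. A hedged client-go equivalent of the binding creation is sketched below; the names come from the kubectl command in the log, and this is not minikube's actual implementation.

package main

import (
	"context"
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location, taken from the log above.
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Equivalent of: kubectl create clusterrolebinding minikube-rbac
	//   --clusterrole=cluster-admin --serviceaccount=kube-system:default
	binding := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "ClusterRole",
			Name:     "cluster-admin",
		},
		Subjects: []rbacv1.Subject{{
			Kind:      "ServiceAccount",
			Name:      "default",
			Namespace: "kube-system",
		}},
	}
	_, err = clientset.RbacV1().ClusterRoleBindings().Create(
		context.TODO(), binding, metav1.CreateOptions{})
	if err != nil {
		fmt.Println("create clusterrolebinding failed (it may already exist):", err)
		return
	}
	fmt.Println("created clusterrolebinding minikube-rbac")
}
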
	I0812 11:49:37.524246   56845 settings.go:142] acquiring lock: {Name:mk4060151bda1f131d6abfbbd19b6a8ab3b5e774 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:49:37.524334   56845 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 11:49:37.526822   56845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/kubeconfig: {Name:mke68f1d372d8e5baa69199b426efaec54860499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:49:37.527125   56845 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.191 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 11:49:37.527189   56845 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0812 11:49:37.527272   56845 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-093615"
	I0812 11:49:37.527285   56845 addons.go:69] Setting default-storageclass=true in profile "embed-certs-093615"
	I0812 11:49:37.527307   56845 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-093615"
	I0812 11:49:37.527307   56845 config.go:182] Loaded profile config "embed-certs-093615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	W0812 11:49:37.527315   56845 addons.go:243] addon storage-provisioner should already be in state true
	I0812 11:49:37.527318   56845 addons.go:69] Setting metrics-server=true in profile "embed-certs-093615"
	I0812 11:49:37.527337   56845 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-093615"
	I0812 11:49:37.527345   56845 host.go:66] Checking if "embed-certs-093615" exists ...
	I0812 11:49:37.527362   56845 addons.go:234] Setting addon metrics-server=true in "embed-certs-093615"
	W0812 11:49:37.527375   56845 addons.go:243] addon metrics-server should already be in state true
	I0812 11:49:37.527413   56845 host.go:66] Checking if "embed-certs-093615" exists ...
	I0812 11:49:37.527769   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.527791   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.527816   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.527798   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.527769   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.527928   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.528806   56845 out.go:177] * Verifying Kubernetes components...
	I0812 11:49:37.530366   56845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:49:37.544367   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33533
	I0812 11:49:37.544919   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45995
	I0812 11:49:37.545052   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.545492   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.545535   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.545551   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.546095   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.546220   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.546247   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.546267   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetState
	I0812 11:49:37.547090   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.547667   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.547697   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.548008   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45761
	I0812 11:49:37.550024   56845 addons.go:234] Setting addon default-storageclass=true in "embed-certs-093615"
	W0812 11:49:37.550048   56845 addons.go:243] addon default-storageclass should already be in state true
	I0812 11:49:37.550079   56845 host.go:66] Checking if "embed-certs-093615" exists ...
	I0812 11:49:37.550469   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.550500   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.550728   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.551342   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.551373   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.551748   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.552314   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.552354   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.566505   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40769
	I0812 11:49:37.567085   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.567510   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.567526   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.567900   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.568133   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetState
	I0812 11:49:37.570307   56845 main.go:141] libmachine: (embed-certs-093615) Calling .DriverName
	I0812 11:49:37.571789   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36425
	I0812 11:49:37.572127   56845 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 11:49:37.572191   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.572730   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.572752   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.573044   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43723
	I0812 11:49:37.573231   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.573619   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.573815   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.573840   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.573849   56845 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 11:49:37.573870   56845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 11:49:37.573890   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHHostname
	I0812 11:49:37.574787   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.574809   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.575722   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.575937   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetState
	I0812 11:49:37.578054   56845 main.go:141] libmachine: (embed-certs-093615) DBG | domain embed-certs-093615 has defined MAC address 52:54:00:a2:eb:0c in network mk-embed-certs-093615
	I0812 11:49:37.578069   56845 main.go:141] libmachine: (embed-certs-093615) Calling .DriverName
	I0812 11:49:37.578536   56845 main.go:141] libmachine: (embed-certs-093615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:eb:0c", ip: ""} in network mk-embed-certs-093615: {Iface:virbr3 ExpiryTime:2024-08-12 12:44:10 +0000 UTC Type:0 Mac:52:54:00:a2:eb:0c Iaid: IPaddr:192.168.72.191 Prefix:24 Hostname:embed-certs-093615 Clientid:01:52:54:00:a2:eb:0c}
	I0812 11:49:37.578565   56845 main.go:141] libmachine: (embed-certs-093615) DBG | domain embed-certs-093615 has defined IP address 192.168.72.191 and MAC address 52:54:00:a2:eb:0c in network mk-embed-certs-093615
	I0812 11:49:37.578833   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHPort
	I0812 11:49:37.579012   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHKeyPath
	I0812 11:49:37.579170   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHUsername
	I0812 11:49:37.579326   56845 sshutil.go:53] new ssh client: &{IP:192.168.72.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/embed-certs-093615/id_rsa Username:docker}
	I0812 11:49:37.580007   56845 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0812 11:49:37.581298   56845 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0812 11:49:37.581313   56845 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0812 11:49:37.581334   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHHostname
	I0812 11:49:37.585114   56845 main.go:141] libmachine: (embed-certs-093615) DBG | domain embed-certs-093615 has defined MAC address 52:54:00:a2:eb:0c in network mk-embed-certs-093615
	I0812 11:49:37.585809   56845 main.go:141] libmachine: (embed-certs-093615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:eb:0c", ip: ""} in network mk-embed-certs-093615: {Iface:virbr3 ExpiryTime:2024-08-12 12:44:10 +0000 UTC Type:0 Mac:52:54:00:a2:eb:0c Iaid: IPaddr:192.168.72.191 Prefix:24 Hostname:embed-certs-093615 Clientid:01:52:54:00:a2:eb:0c}
	I0812 11:49:37.585839   56845 main.go:141] libmachine: (embed-certs-093615) DBG | domain embed-certs-093615 has defined IP address 192.168.72.191 and MAC address 52:54:00:a2:eb:0c in network mk-embed-certs-093615
	I0812 11:49:37.585914   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHPort
	I0812 11:49:37.586160   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHKeyPath
	I0812 11:49:37.586338   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHUsername
	I0812 11:49:37.586476   56845 sshutil.go:53] new ssh client: &{IP:192.168.72.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/embed-certs-093615/id_rsa Username:docker}
	I0812 11:49:37.591678   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I0812 11:49:37.592146   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.592684   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.592702   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.593075   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.593241   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetState
	I0812 11:49:37.595117   56845 main.go:141] libmachine: (embed-certs-093615) Calling .DriverName
	I0812 11:49:37.595398   56845 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 11:49:37.595413   56845 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 11:49:37.595430   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHHostname
	I0812 11:49:37.598417   56845 main.go:141] libmachine: (embed-certs-093615) DBG | domain embed-certs-093615 has defined MAC address 52:54:00:a2:eb:0c in network mk-embed-certs-093615
	I0812 11:49:37.598771   56845 main.go:141] libmachine: (embed-certs-093615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:eb:0c", ip: ""} in network mk-embed-certs-093615: {Iface:virbr3 ExpiryTime:2024-08-12 12:44:10 +0000 UTC Type:0 Mac:52:54:00:a2:eb:0c Iaid: IPaddr:192.168.72.191 Prefix:24 Hostname:embed-certs-093615 Clientid:01:52:54:00:a2:eb:0c}
	I0812 11:49:37.598792   56845 main.go:141] libmachine: (embed-certs-093615) DBG | domain embed-certs-093615 has defined IP address 192.168.72.191 and MAC address 52:54:00:a2:eb:0c in network mk-embed-certs-093615
	I0812 11:49:37.599008   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHPort
	I0812 11:49:37.599209   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHKeyPath
	I0812 11:49:37.599369   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHUsername
	I0812 11:49:37.599507   56845 sshutil.go:53] new ssh client: &{IP:192.168.72.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/embed-certs-093615/id_rsa Username:docker}
	I0812 11:49:37.757714   56845 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 11:49:37.783594   56845 node_ready.go:35] waiting up to 6m0s for node "embed-certs-093615" to be "Ready" ...
	I0812 11:49:37.801679   56845 node_ready.go:49] node "embed-certs-093615" has status "Ready":"True"
	I0812 11:49:37.801707   56845 node_ready.go:38] duration metric: took 18.078817ms for node "embed-certs-093615" to be "Ready" ...
	I0812 11:49:37.801719   56845 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:49:37.814704   56845 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cjbwn" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:37.860064   56845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 11:49:37.913642   56845 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0812 11:49:37.913673   56845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0812 11:49:37.932638   56845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 11:49:37.948027   56845 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0812 11:49:37.948052   56845 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0812 11:49:38.000773   56845 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 11:49:38.000805   56845 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0812 11:49:38.050478   56845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 11:49:38.655431   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.655458   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.655477   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.655460   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.655760   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.655875   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.655888   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.655897   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.655792   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.655971   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.655979   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.655986   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.655812   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.655832   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.656156   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.656161   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.656172   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.656199   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.656225   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.656231   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.707240   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.707268   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.707596   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.707618   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.707667   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.832725   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.832758   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.833072   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.833114   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.833134   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.833155   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.833165   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.833416   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.833461   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.833472   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.833483   56845 addons.go:475] Verifying addon metrics-server=true in "embed-certs-093615"
	I0812 11:49:38.835319   56845 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0812 11:49:34.589171   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:38.836977   56845 addons.go:510] duration metric: took 1.309786928s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
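
The addon step above copies the storage-provisioner and metrics-server manifests onto the node and applies them with the node-local kubectl binary and kubeconfig. A sketch of that final apply, run on the node itself, is shown below; the binary version and manifest paths are copied from the log, and the wrapper code is illustrative rather than minikube's.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Manifests scp'd to the node earlier in the log.
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	// Mirrors: sudo KUBECONFIG=/var/lib/minikube/kubeconfig
	//   /var/lib/minikube/binaries/v1.30.3/kubectl apply -f ... -f ...
	cmd := exec.Command("/var/lib/minikube/binaries/v1.30.3/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("kubectl apply failed:", err)
	}
}
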
	I0812 11:49:39.827672   56845 pod_ready.go:102] pod "coredns-7db6d8ff4d-cjbwn" in "kube-system" namespace has status "Ready":"False"
	I0812 11:49:40.820793   56845 pod_ready.go:92] pod "coredns-7db6d8ff4d-cjbwn" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:40.820818   56845 pod_ready.go:81] duration metric: took 3.006078866s for pod "coredns-7db6d8ff4d-cjbwn" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.820828   56845 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zcpcc" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.825674   56845 pod_ready.go:92] pod "coredns-7db6d8ff4d-zcpcc" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:40.825696   56845 pod_ready.go:81] duration metric: took 4.862671ms for pod "coredns-7db6d8ff4d-zcpcc" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.825705   56845 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.830668   56845 pod_ready.go:92] pod "etcd-embed-certs-093615" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:40.830690   56845 pod_ready.go:81] duration metric: took 4.979449ms for pod "etcd-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.830699   56845 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.834732   56845 pod_ready.go:92] pod "kube-apiserver-embed-certs-093615" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:40.834750   56845 pod_ready.go:81] duration metric: took 4.044023ms for pod "kube-apiserver-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.834759   56845 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.838476   56845 pod_ready.go:92] pod "kube-controller-manager-embed-certs-093615" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:40.838493   56845 pod_ready.go:81] duration metric: took 3.728686ms for pod "kube-controller-manager-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.838502   56845 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-26xvl" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:41.219756   56845 pod_ready.go:92] pod "kube-proxy-26xvl" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:41.219778   56845 pod_ready.go:81] duration metric: took 381.271425ms for pod "kube-proxy-26xvl" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:41.219789   56845 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:41.619078   56845 pod_ready.go:92] pod "kube-scheduler-embed-certs-093615" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:41.619107   56845 pod_ready.go:81] duration metric: took 399.30989ms for pod "kube-scheduler-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:41.619117   56845 pod_ready.go:38] duration metric: took 3.817386457s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:49:41.619135   56845 api_server.go:52] waiting for apiserver process to appear ...
	I0812 11:49:41.619197   56845 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:49:41.634452   56845 api_server.go:72] duration metric: took 4.107285578s to wait for apiserver process to appear ...
	I0812 11:49:41.634480   56845 api_server.go:88] waiting for apiserver healthz status ...
	I0812 11:49:41.634505   56845 api_server.go:253] Checking apiserver healthz at https://192.168.72.191:8443/healthz ...
	I0812 11:49:41.639610   56845 api_server.go:279] https://192.168.72.191:8443/healthz returned 200:
	ok
	I0812 11:49:41.640514   56845 api_server.go:141] control plane version: v1.30.3
	I0812 11:49:41.640537   56845 api_server.go:131] duration metric: took 6.049802ms to wait for apiserver health ...
	I0812 11:49:41.640547   56845 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 11:49:41.823614   56845 system_pods.go:59] 9 kube-system pods found
	I0812 11:49:41.823652   56845 system_pods.go:61] "coredns-7db6d8ff4d-cjbwn" [ec8ff679-9b23-481d-b8c5-207b54e7e5ea] Running
	I0812 11:49:41.823659   56845 system_pods.go:61] "coredns-7db6d8ff4d-zcpcc" [ed76b19c-cd96-4754-ae07-08a2a0b91387] Running
	I0812 11:49:41.823665   56845 system_pods.go:61] "etcd-embed-certs-093615" [853d7fe8-00c2-434f-b88a-2b37e1608906] Running
	I0812 11:49:41.823670   56845 system_pods.go:61] "kube-apiserver-embed-certs-093615" [983122d1-800a-4991-96f8-29ae69ea7166] Running
	I0812 11:49:41.823675   56845 system_pods.go:61] "kube-controller-manager-embed-certs-093615" [b9eceb97-a4bd-43e2-a115-c483c9131fa7] Running
	I0812 11:49:41.823680   56845 system_pods.go:61] "kube-proxy-26xvl" [cacdea2f-2ce2-43ab-8e3e-104a7a40d027] Running
	I0812 11:49:41.823685   56845 system_pods.go:61] "kube-scheduler-embed-certs-093615" [b5653b7a-db54-4584-ab69-1232a9c58d9c] Running
	I0812 11:49:41.823693   56845 system_pods.go:61] "metrics-server-569cc877fc-kwk6t" [5817f68c-ab3e-4b50-acf1-8d56d25dcbcd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 11:49:41.823697   56845 system_pods.go:61] "storage-provisioner" [c29d9422-fc62-4536-974b-70ba940152c2] Running
	I0812 11:49:41.823704   56845 system_pods.go:74] duration metric: took 183.151482ms to wait for pod list to return data ...
	I0812 11:49:41.823711   56845 default_sa.go:34] waiting for default service account to be created ...
	I0812 11:49:42.017840   56845 default_sa.go:45] found service account: "default"
	I0812 11:49:42.017870   56845 default_sa.go:55] duration metric: took 194.151916ms for default service account to be created ...
	I0812 11:49:42.017886   56845 system_pods.go:116] waiting for k8s-apps to be running ...
	I0812 11:49:42.222050   56845 system_pods.go:86] 9 kube-system pods found
	I0812 11:49:42.222084   56845 system_pods.go:89] "coredns-7db6d8ff4d-cjbwn" [ec8ff679-9b23-481d-b8c5-207b54e7e5ea] Running
	I0812 11:49:42.222092   56845 system_pods.go:89] "coredns-7db6d8ff4d-zcpcc" [ed76b19c-cd96-4754-ae07-08a2a0b91387] Running
	I0812 11:49:42.222098   56845 system_pods.go:89] "etcd-embed-certs-093615" [853d7fe8-00c2-434f-b88a-2b37e1608906] Running
	I0812 11:49:42.222104   56845 system_pods.go:89] "kube-apiserver-embed-certs-093615" [983122d1-800a-4991-96f8-29ae69ea7166] Running
	I0812 11:49:42.222110   56845 system_pods.go:89] "kube-controller-manager-embed-certs-093615" [b9eceb97-a4bd-43e2-a115-c483c9131fa7] Running
	I0812 11:49:42.222116   56845 system_pods.go:89] "kube-proxy-26xvl" [cacdea2f-2ce2-43ab-8e3e-104a7a40d027] Running
	I0812 11:49:42.222122   56845 system_pods.go:89] "kube-scheduler-embed-certs-093615" [b5653b7a-db54-4584-ab69-1232a9c58d9c] Running
	I0812 11:49:42.222133   56845 system_pods.go:89] "metrics-server-569cc877fc-kwk6t" [5817f68c-ab3e-4b50-acf1-8d56d25dcbcd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 11:49:42.222140   56845 system_pods.go:89] "storage-provisioner" [c29d9422-fc62-4536-974b-70ba940152c2] Running
	I0812 11:49:42.222157   56845 system_pods.go:126] duration metric: took 204.263322ms to wait for k8s-apps to be running ...
	I0812 11:49:42.222169   56845 system_svc.go:44] waiting for kubelet service to be running ....
	I0812 11:49:42.222224   56845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:49:42.235891   56845 system_svc.go:56] duration metric: took 13.715083ms WaitForService to wait for kubelet
	I0812 11:49:42.235920   56845 kubeadm.go:582] duration metric: took 4.708757648s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 11:49:42.235945   56845 node_conditions.go:102] verifying NodePressure condition ...
	I0812 11:49:42.418727   56845 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 11:49:42.418761   56845 node_conditions.go:123] node cpu capacity is 2
	I0812 11:49:42.418773   56845 node_conditions.go:105] duration metric: took 182.823582ms to run NodePressure ...
	I0812 11:49:42.418789   56845 start.go:241] waiting for startup goroutines ...
	I0812 11:49:42.418799   56845 start.go:246] waiting for cluster config update ...
	I0812 11:49:42.418812   56845 start.go:255] writing updated cluster config ...
	I0812 11:49:42.419150   56845 ssh_runner.go:195] Run: rm -f paused
	I0812 11:49:42.468981   56845 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0812 11:49:42.471931   56845 out.go:177] * Done! kubectl is now configured to use "embed-certs-093615" cluster and "default" namespace by default
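
Both successful runs above finish by checking https://<node-ip>:8443/healthz and expecting "200: ok" before declaring the cluster done. A self-contained Go sketch of that probe follows; the address is the embed-certs node IP from the log, and TLS verification is skipped here only to keep the example standalone (the real check trusts the cluster CA instead).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Illustration only: InsecureSkipVerify avoids needing the cluster CA bundle.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.72.191:8443/healthz")
	if err != nil {
		fmt.Println("apiserver healthz probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers "200 OK" with body "ok", as in the log above.
	fmt.Printf("%s: %s\n", resp.Status, body)
}
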
	I0812 11:49:40.669207   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:43.741090   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:49.821138   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:52.893281   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:58.973141   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:02.045165   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:08.129133   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:07.530363   57198 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0812 11:50:07.530652   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:50:07.530821   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:50:11.197137   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:12.531246   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:50:12.531502   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:50:17.277119   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:20.349149   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:22.532192   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:50:22.532372   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:50:26.429100   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:29.501158   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:35.581137   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:38.653143   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:42.533597   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:50:42.533815   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:50:44.733130   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:47.805192   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:53.885100   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:56.957154   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:03.037201   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:06.109079   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:12.189138   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:15.261132   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:22.535173   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:51:22.535490   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:51:22.535516   57198 kubeadm.go:310] 
	I0812 11:51:22.535573   57198 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0812 11:51:22.535625   57198 kubeadm.go:310] 		timed out waiting for the condition
	I0812 11:51:22.535646   57198 kubeadm.go:310] 
	I0812 11:51:22.535692   57198 kubeadm.go:310] 	This error is likely caused by:
	I0812 11:51:22.535728   57198 kubeadm.go:310] 		- The kubelet is not running
	I0812 11:51:22.535855   57198 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0812 11:51:22.535870   57198 kubeadm.go:310] 
	I0812 11:51:22.535954   57198 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0812 11:51:22.535985   57198 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0812 11:51:22.536028   57198 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0812 11:51:22.536038   57198 kubeadm.go:310] 
	I0812 11:51:22.536168   57198 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0812 11:51:22.536276   57198 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0812 11:51:22.536290   57198 kubeadm.go:310] 
	I0812 11:51:22.536440   57198 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0812 11:51:22.536532   57198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0812 11:51:22.536610   57198 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0812 11:51:22.536692   57198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0812 11:51:22.536701   57198 kubeadm.go:310] 
	I0812 11:51:22.537300   57198 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 11:51:22.537416   57198 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0812 11:51:22.537516   57198 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0812 11:51:22.537602   57198 kubeadm.go:394] duration metric: took 7m56.533771451s to StartCluster
	I0812 11:51:22.537650   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:51:22.537769   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:51:22.583654   57198 cri.go:89] found id: ""
	I0812 11:51:22.583679   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.583686   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:51:22.583692   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:51:22.583739   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:51:22.619477   57198 cri.go:89] found id: ""
	I0812 11:51:22.619510   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.619521   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:51:22.619528   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:51:22.619586   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:51:22.653038   57198 cri.go:89] found id: ""
	I0812 11:51:22.653068   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.653078   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:51:22.653085   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:51:22.653149   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:51:22.686106   57198 cri.go:89] found id: ""
	I0812 11:51:22.686134   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.686142   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:51:22.686148   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:51:22.686196   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:51:22.723533   57198 cri.go:89] found id: ""
	I0812 11:51:22.723560   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.723567   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:51:22.723572   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:51:22.723629   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:51:22.767355   57198 cri.go:89] found id: ""
	I0812 11:51:22.767382   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.767390   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:51:22.767395   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:51:22.767472   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:51:22.807472   57198 cri.go:89] found id: ""
	I0812 11:51:22.807509   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.807522   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:51:22.807530   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:51:22.807604   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:51:22.842565   57198 cri.go:89] found id: ""
	I0812 11:51:22.842594   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.842603   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:51:22.842615   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:51:22.842629   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:51:22.894638   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:51:22.894677   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:51:22.907871   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:51:22.907902   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:51:22.989089   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:51:22.989114   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:51:22.989126   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:51:23.114659   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:51:23.114713   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0812 11:51:23.168124   57198 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0812 11:51:23.168182   57198 out.go:239] * 
	W0812 11:51:23.168252   57198 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0812 11:51:23.168284   57198 out.go:239] * 
	W0812 11:51:23.169113   57198 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 11:51:23.173151   57198 out.go:177] 
	W0812 11:51:23.174712   57198 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0812 11:51:23.174762   57198 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0812 11:51:23.174782   57198 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0812 11:51:23.176508   57198 out.go:177] 
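	(Annotation) The failure above (PID 57198, Kubernetes v1.20.0) reduces to the kubelet never answering GET http://localhost:10248/healthz, so kubeadm's wait-control-plane phase times out and no control-plane containers are ever found by crictl. As a hedged illustration of the [kubelet-check] probe loop the log describes — generic code, not kubeadm's source; URL and intervals mirror the log but are example parameters — a self-contained Go sketch:

	// healthz_probe.go — illustrative sketch of the kubelet healthz polling seen
	// in the [kubelet-check] lines above; not kubeadm's implementation.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// probeKubeletHealthz polls url every interval until it returns HTTP 200
	// or the overall timeout expires.
	func probeKubeletHealthz(url string, interval, timeout time.Duration) error {
		client := &http.Client{Timeout: 5 * time.Second}
		deadline := time.Now().Add(timeout)
		for {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				err = fmt.Errorf("unexpected status %s", resp.Status)
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("kubelet healthz never became ready: %v", err)
			}
			fmt.Printf("[kubelet-check] %v; retrying in %s\n", err, interval)
			time.Sleep(interval)
		}
	}

	func main() {
		// 10248 is the kubelet's default healthz port, matching the log above.
		if err := probeKubeletHealthz("http://127.0.0.1:10248/healthz", 5*time.Second, 40*time.Second); err != nil {
			fmt.Println(err)
		}
	}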
	I0812 11:51:21.341126   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:24.413107   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:30.493143   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:33.569122   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:36.569554   59908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 11:51:36.569591   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetMachineName
	I0812 11:51:36.569943   59908 buildroot.go:166] provisioning hostname "default-k8s-diff-port-581883"
	I0812 11:51:36.569973   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetMachineName
	I0812 11:51:36.570201   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:51:36.571680   59908 machine.go:97] duration metric: took 4m37.426765365s to provisionDockerMachine
	I0812 11:51:36.571724   59908 fix.go:56] duration metric: took 4m37.448153773s for fixHost
	I0812 11:51:36.571736   59908 start.go:83] releasing machines lock for "default-k8s-diff-port-581883", held for 4m37.448177825s
	W0812 11:51:36.571759   59908 start.go:714] error starting host: provision: host is not running
	W0812 11:51:36.571863   59908 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0812 11:51:36.571879   59908 start.go:729] Will try again in 5 seconds ...
	I0812 11:51:41.573924   59908 start.go:360] acquireMachinesLock for default-k8s-diff-port-581883: {Name:mkf1f3ff7da562a087a1e344d13e67c3c8140973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 11:51:41.574052   59908 start.go:364] duration metric: took 85.852µs to acquireMachinesLock for "default-k8s-diff-port-581883"
	I0812 11:51:41.574082   59908 start.go:96] Skipping create...Using existing machine configuration
	I0812 11:51:41.574092   59908 fix.go:54] fixHost starting: 
	I0812 11:51:41.574362   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:51:41.574405   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:51:41.589947   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37355
	I0812 11:51:41.590440   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:51:41.590917   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:51:41.590937   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:51:41.591264   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:51:41.591434   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:51:41.591577   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetState
	I0812 11:51:41.593079   59908 fix.go:112] recreateIfNeeded on default-k8s-diff-port-581883: state=Stopped err=<nil>
	I0812 11:51:41.593104   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	W0812 11:51:41.593250   59908 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 11:51:41.595246   59908 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-581883" ...
	I0812 11:51:41.596770   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Start
	I0812 11:51:41.596979   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Ensuring networks are active...
	I0812 11:51:41.598006   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Ensuring network default is active
	I0812 11:51:41.598500   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Ensuring network mk-default-k8s-diff-port-581883 is active
	I0812 11:51:41.598920   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Getting domain xml...
	I0812 11:51:41.599684   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Creating domain...
	I0812 11:51:42.863317   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting to get IP...
	I0812 11:51:42.864358   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:42.864816   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:42.864907   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:42.864802   61181 retry.go:31] will retry after 220.174363ms: waiting for machine to come up
	I0812 11:51:43.086204   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:43.086832   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:43.086861   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:43.086783   61181 retry.go:31] will retry after 342.897936ms: waiting for machine to come up
	I0812 11:51:43.431059   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:43.431549   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:43.431584   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:43.431497   61181 retry.go:31] will retry after 465.154278ms: waiting for machine to come up
	I0812 11:51:43.898042   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:43.898580   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:43.898604   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:43.898518   61181 retry.go:31] will retry after 498.287765ms: waiting for machine to come up
	I0812 11:51:44.398086   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:44.398736   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:44.398763   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:44.398682   61181 retry.go:31] will retry after 617.809106ms: waiting for machine to come up
	I0812 11:51:45.018733   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:45.019273   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:45.019307   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:45.019217   61181 retry.go:31] will retry after 864.46319ms: waiting for machine to come up
	I0812 11:51:45.885081   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:45.885555   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:45.885585   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:45.885529   61181 retry.go:31] will retry after 1.067767105s: waiting for machine to come up
	I0812 11:51:46.954710   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:46.955061   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:46.955087   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:46.955020   61181 retry.go:31] will retry after 927.472236ms: waiting for machine to come up
	I0812 11:51:47.883766   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:47.884191   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:47.884216   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:47.884146   61181 retry.go:31] will retry after 1.493170608s: waiting for machine to come up
	I0812 11:51:49.378898   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:49.379317   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:49.379350   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:49.379297   61181 retry.go:31] will retry after 1.599397392s: waiting for machine to come up
	I0812 11:51:50.981013   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:50.981714   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:50.981745   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:50.981642   61181 retry.go:31] will retry after 1.779019847s: waiting for machine to come up
	I0812 11:51:52.762246   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:52.762670   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:52.762707   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:52.762629   61181 retry.go:31] will retry after 3.410620248s: waiting for machine to come up
	I0812 11:51:56.175010   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:56.175542   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:56.175573   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:56.175490   61181 retry.go:31] will retry after 3.890343984s: waiting for machine to come up
	I0812 11:52:00.069904   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.070591   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has current primary IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.070606   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Found IP for machine: 192.168.50.114
	I0812 11:52:00.070616   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Reserving static IP address...
	I0812 11:52:00.071153   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Reserved static IP address: 192.168.50.114
	I0812 11:52:00.071183   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for SSH to be available...
	I0812 11:52:00.071206   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-581883", mac: "52:54:00:76:2f:ab", ip: "192.168.50.114"} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:00.071228   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | skip adding static IP to network mk-default-k8s-diff-port-581883 - found existing host DHCP lease matching {name: "default-k8s-diff-port-581883", mac: "52:54:00:76:2f:ab", ip: "192.168.50.114"}
	I0812 11:52:00.071242   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | Getting to WaitForSSH function...
	I0812 11:52:00.073315   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.073647   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:00.073676   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.073838   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | Using SSH client type: external
	I0812 11:52:00.073868   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | Using SSH private key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa (-rw-------)
	I0812 11:52:00.073909   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0812 11:52:00.073926   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | About to run SSH command:
	I0812 11:52:00.073941   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | exit 0
	I0812 11:52:00.201064   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | SSH cmd err, output: <nil>: 
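	(Annotation) Before provisioning resumes, PID 59908's log above cycles through `dial tcp 192.168.50.114:22: connect: no route to host` and per-attempt retry.go backoffs until the restarted VM's SSH port finally answers. A minimal sketch of that wait-for-TCP-port pattern — generic code under assumed address and limits, not libmachine's WaitForSSH:

	// wait_for_ssh.go — illustrative sketch of the "Waiting for SSH to be
	// available..." retry loop logged above; not libmachine's code.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForTCP dials addr repeatedly with a growing backoff and returns once
	// a TCP connection succeeds or maxWait elapses. Values are example choices.
	func waitForTCP(addr string, maxWait time.Duration) error {
		deadline := time.Now().Add(maxWait)
		backoff := 500 * time.Millisecond
		for {
			conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("gave up waiting for %s: %v", addr, err)
			}
			fmt.Printf("will retry after %s: %v\n", backoff, err)
			time.Sleep(backoff)
			if backoff < 10*time.Second {
				backoff *= 2 // grow the delay, roughly like the retries logged above
			}
		}
	}

	func main() {
		if err := waitForTCP("192.168.50.114:22", 5*time.Minute); err != nil {
			fmt.Println(err)
		}
	}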
	I0812 11:52:00.201417   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetConfigRaw
	I0812 11:52:00.202026   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetIP
	I0812 11:52:00.204566   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.204855   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:00.204895   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.205179   59908 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/config.json ...
	I0812 11:52:00.205369   59908 machine.go:94] provisionDockerMachine start ...
	I0812 11:52:00.205387   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:00.205698   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:00.208214   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.208623   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:00.208656   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.208749   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:00.208932   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:00.209111   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:00.209227   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:00.209359   59908 main.go:141] libmachine: Using SSH client type: native
	I0812 11:52:00.209519   59908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0812 11:52:00.209529   59908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0812 11:52:00.317075   59908 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0812 11:52:00.317106   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetMachineName
	I0812 11:52:00.317394   59908 buildroot.go:166] provisioning hostname "default-k8s-diff-port-581883"
	I0812 11:52:00.317427   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetMachineName
	I0812 11:52:00.317617   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:00.320809   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.321256   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:00.321297   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.321415   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:00.321625   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:00.321793   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:00.321927   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:00.322174   59908 main.go:141] libmachine: Using SSH client type: native
	I0812 11:52:00.322337   59908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0812 11:52:00.322350   59908 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-581883 && echo "default-k8s-diff-port-581883" | sudo tee /etc/hostname
	I0812 11:52:00.448512   59908 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-581883
	
	I0812 11:52:00.448544   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:00.451372   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.451915   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:00.451942   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.452144   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:00.452341   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:00.452510   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:00.452661   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:00.452823   59908 main.go:141] libmachine: Using SSH client type: native
	I0812 11:52:00.453021   59908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0812 11:52:00.453038   59908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-581883' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-581883/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-581883' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 11:52:00.569754   59908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 11:52:00.569791   59908 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19409-3774/.minikube CaCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19409-3774/.minikube}
	I0812 11:52:00.569808   59908 buildroot.go:174] setting up certificates
	I0812 11:52:00.569818   59908 provision.go:84] configureAuth start
	I0812 11:52:00.569829   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetMachineName
	I0812 11:52:00.570114   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetIP
	I0812 11:52:00.572834   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.573325   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:00.573357   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.573549   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:00.576212   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.576670   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:00.576717   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.576915   59908 provision.go:143] copyHostCerts
	I0812 11:52:00.576979   59908 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem, removing ...
	I0812 11:52:00.576989   59908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem
	I0812 11:52:00.577051   59908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem (1082 bytes)
	I0812 11:52:00.577148   59908 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem, removing ...
	I0812 11:52:00.577157   59908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem
	I0812 11:52:00.577184   59908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem (1123 bytes)
	I0812 11:52:00.577241   59908 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem, removing ...
	I0812 11:52:00.577248   59908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem
	I0812 11:52:00.577270   59908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem (1679 bytes)
	I0812 11:52:00.577366   59908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-581883 san=[127.0.0.1 192.168.50.114 default-k8s-diff-port-581883 localhost minikube]
	I0812 11:52:01.053674   59908 provision.go:177] copyRemoteCerts
	I0812 11:52:01.053733   59908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 11:52:01.053756   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:01.056305   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.056840   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:01.056894   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.057105   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:01.057325   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:01.057486   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:01.057641   59908 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa Username:docker}
	I0812 11:52:01.142765   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0812 11:52:01.168430   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0812 11:52:01.193360   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0812 11:52:01.218125   59908 provision.go:87] duration metric: took 648.29686ms to configureAuth
	I0812 11:52:01.218151   59908 buildroot.go:189] setting minikube options for container-runtime
	I0812 11:52:01.218337   59908 config.go:182] Loaded profile config "default-k8s-diff-port-581883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 11:52:01.218432   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:01.221497   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.221858   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:01.221887   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.222077   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:01.222261   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:01.222436   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:01.222596   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:01.222736   59908 main.go:141] libmachine: Using SSH client type: native
	I0812 11:52:01.222963   59908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0812 11:52:01.222986   59908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 11:52:01.490986   59908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 11:52:01.491013   59908 machine.go:97] duration metric: took 1.285630113s to provisionDockerMachine
	I0812 11:52:01.491026   59908 start.go:293] postStartSetup for "default-k8s-diff-port-581883" (driver="kvm2")
	I0812 11:52:01.491038   59908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 11:52:01.491054   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:01.491385   59908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 11:52:01.491414   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:01.494451   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.494830   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:01.494881   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.495025   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:01.495216   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:01.495372   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:01.495522   59908 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa Username:docker}
	I0812 11:52:01.579756   59908 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 11:52:01.583802   59908 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 11:52:01.583828   59908 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/addons for local assets ...
	I0812 11:52:01.583952   59908 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/files for local assets ...
	I0812 11:52:01.584051   59908 filesync.go:149] local asset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> 109272.pem in /etc/ssl/certs
	I0812 11:52:01.584167   59908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 11:52:01.593940   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /etc/ssl/certs/109272.pem (1708 bytes)
	I0812 11:52:01.619301   59908 start.go:296] duration metric: took 128.258855ms for postStartSetup
	I0812 11:52:01.619343   59908 fix.go:56] duration metric: took 20.045251384s for fixHost
	I0812 11:52:01.619365   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:01.622507   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.622917   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:01.622954   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.623116   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:01.623308   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:01.623461   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:01.623623   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:01.623803   59908 main.go:141] libmachine: Using SSH client type: native
	I0812 11:52:01.624015   59908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0812 11:52:01.624031   59908 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0812 11:52:01.733552   59908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723463521.708750952
	
	I0812 11:52:01.733588   59908 fix.go:216] guest clock: 1723463521.708750952
	I0812 11:52:01.733613   59908 fix.go:229] Guest: 2024-08-12 11:52:01.708750952 +0000 UTC Remote: 2024-08-12 11:52:01.619347823 +0000 UTC m=+302.640031526 (delta=89.403129ms)
	I0812 11:52:01.733639   59908 fix.go:200] guest clock delta is within tolerance: 89.403129ms
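
The fix.go lines above read the guest clock over SSH (via date +%s.%N), compare it to the host clock, and accept the ~89ms delta as within tolerance. A minimal Go sketch of that drift comparison, with an assumed 2-second tolerance and illustrative function names (not minikube's actual fix.go API):

package main

import (
    "fmt"
    "time"
)

// clockDriftWithinTolerance reports whether the absolute difference between the
// guest and host clocks is small enough to skip a time resync. Illustrative only.
func clockDriftWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
    drift := guest.Sub(host)
    if drift < 0 {
        drift = -drift
    }
    return drift <= tolerance
}

func main() {
    host := time.Now()
    guest := host.Add(89403129 * time.Nanosecond) // the ~89.4ms delta logged above
    fmt.Println("within tolerance:", clockDriftWithinTolerance(guest, host, 2*time.Second))
}
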
	I0812 11:52:01.733646   59908 start.go:83] releasing machines lock for "default-k8s-diff-port-581883", held for 20.15958144s
	I0812 11:52:01.733673   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:01.733971   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetIP
	I0812 11:52:01.736957   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.737359   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:01.737388   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.737569   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:01.738113   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:01.738315   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:01.738404   59908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 11:52:01.738444   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:01.738710   59908 ssh_runner.go:195] Run: cat /version.json
	I0812 11:52:01.738746   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:01.741424   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.741655   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.741906   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:01.741935   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.742092   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:01.742120   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:01.742120   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.742293   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:01.742317   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:01.742487   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:01.742501   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:01.742693   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:01.742709   59908 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa Username:docker}
	I0812 11:52:01.742854   59908 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa Username:docker}
	I0812 11:52:01.821742   59908 ssh_runner.go:195] Run: systemctl --version
	I0812 11:52:01.854649   59908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 11:52:01.994050   59908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 11:52:02.000754   59908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 11:52:02.000848   59908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 11:52:02.017212   59908 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0812 11:52:02.017240   59908 start.go:495] detecting cgroup driver to use...
	I0812 11:52:02.017310   59908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 11:52:02.035650   59908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 11:52:02.050036   59908 docker.go:217] disabling cri-docker service (if available) ...
	I0812 11:52:02.050114   59908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 11:52:02.063916   59908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 11:52:02.078938   59908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 11:52:02.194945   59908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 11:52:02.366538   59908 docker.go:233] disabling docker service ...
	I0812 11:52:02.366616   59908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 11:52:02.380648   59908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 11:52:02.393284   59908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 11:52:02.513560   59908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 11:52:02.638028   59908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 11:52:02.662395   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 11:52:02.683732   59908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0812 11:52:02.683798   59908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:52:02.695379   59908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 11:52:02.695437   59908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:52:02.706905   59908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:52:02.718338   59908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:52:02.729708   59908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 11:52:02.740127   59908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:52:02.750198   59908 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:52:02.766470   59908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:52:02.777845   59908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 11:52:02.788254   59908 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0812 11:52:02.788322   59908 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0812 11:52:02.800552   59908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 11:52:02.809932   59908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:52:02.950568   59908 ssh_runner.go:195] Run: sudo systemctl restart crio
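
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, default_sysctls) before CRI-O is restarted. A hedged Go sketch of that override pattern follows; the helper name is illustrative and it runs sed locally rather than over SSH as the provisioner does:

package main

import (
    "fmt"
    "os/exec"
)

// setCrioOption rewrites any existing `key = ...` line in the CRI-O drop-in so it
// reads `key = "value"`, mirroring the sed invocations in the log above.
func setCrioOption(key, value string) error {
    script := fmt.Sprintf(
        `sudo sed -i 's|^.*%s = .*$|%s = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`,
        key, key, value)
    return exec.Command("sh", "-c", script).Run()
}

func main() {
    // The same two overrides applied before `systemctl restart crio`.
    for k, v := range map[string]string{
        "pause_image":    "registry.k8s.io/pause:3.9",
        "cgroup_manager": "cgroupfs",
    } {
        if err := setCrioOption(k, v); err != nil {
            fmt.Printf("failed to set %s: %v\n", k, err)
        }
    }
}
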
	I0812 11:52:03.087957   59908 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 11:52:03.088031   59908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 11:52:03.094543   59908 start.go:563] Will wait 60s for crictl version
	I0812 11:52:03.094597   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:52:03.098447   59908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 11:52:03.139477   59908 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0812 11:52:03.139561   59908 ssh_runner.go:195] Run: crio --version
	I0812 11:52:03.169931   59908 ssh_runner.go:195] Run: crio --version
	I0812 11:52:03.202808   59908 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0812 11:52:03.203979   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetIP
	I0812 11:52:03.206641   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:03.207046   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:03.207078   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:03.207300   59908 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0812 11:52:03.211169   59908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
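
The command above keeps the host.minikube.internal mapping idempotent: it filters any existing entry out of /etc/hosts, appends the current one, and copies the result back into place (the same pipeline reappears later for control-plane.minikube.internal). An equivalent Go sketch of that filter-and-append step, written against a scratch file for illustration rather than the real /etc/hosts:

package main

import (
    "fmt"
    "os"
    "strings"
)

// ensureHostsEntry drops any line already mapping `name` and appends "ip\tname",
// mirroring the grep -v / echo / cp pipeline in the log above. Illustrative only.
func ensureHostsEntry(path, ip, name string) error {
    data, err := os.ReadFile(path)
    if err != nil {
        return err
    }
    var kept []string
    for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
        if strings.HasSuffix(line, "\t"+name) {
            continue // remove a stale mapping for this name
        }
        kept = append(kept, line)
    }
    kept = append(kept, ip+"\t"+name)
    return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
    // Work on a scratch copy for this sketch rather than the real /etc/hosts.
    path := "/tmp/hosts.example"
    _ = os.WriteFile(path, []byte("127.0.0.1\tlocalhost\n"), 0644)
    if err := ensureHostsEntry(path, "192.168.50.1", "host.minikube.internal"); err != nil {
        fmt.Println(err)
    }
}
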
	I0812 11:52:03.222676   59908 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-581883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-581883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0812 11:52:03.222798   59908 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 11:52:03.222835   59908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 11:52:03.258003   59908 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0812 11:52:03.258074   59908 ssh_runner.go:195] Run: which lz4
	I0812 11:52:03.261945   59908 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0812 11:52:03.266002   59908 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0812 11:52:03.266035   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0812 11:52:04.616538   59908 crio.go:462] duration metric: took 1.354621946s to copy over tarball
	I0812 11:52:04.616600   59908 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0812 11:52:06.801880   59908 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.185257635s)
	I0812 11:52:06.801905   59908 crio.go:469] duration metric: took 2.18534207s to extract the tarball
	I0812 11:52:06.801912   59908 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0812 11:52:06.840167   59908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 11:52:06.887647   59908 crio.go:514] all images are preloaded for cri-o runtime.
	I0812 11:52:06.887669   59908 cache_images.go:84] Images are preloaded, skipping loading
	I0812 11:52:06.887677   59908 kubeadm.go:934] updating node { 192.168.50.114 8444 v1.30.3 crio true true} ...
	I0812 11:52:06.887780   59908 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-581883 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-581883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0812 11:52:06.887863   59908 ssh_runner.go:195] Run: crio config
	I0812 11:52:06.944347   59908 cni.go:84] Creating CNI manager for ""
	I0812 11:52:06.944372   59908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:52:06.944385   59908 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0812 11:52:06.944404   59908 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.114 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-581883 NodeName:default-k8s-diff-port-581883 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0812 11:52:06.944582   59908 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.114
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-581883"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0812 11:52:06.944660   59908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0812 11:52:06.954792   59908 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 11:52:06.954853   59908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0812 11:52:06.964625   59908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0812 11:52:06.981467   59908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 11:52:06.998649   59908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0812 11:52:07.017062   59908 ssh_runner.go:195] Run: grep 192.168.50.114	control-plane.minikube.internal$ /etc/hosts
	I0812 11:52:07.020710   59908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 11:52:07.033442   59908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:52:07.164673   59908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 11:52:07.183526   59908 certs.go:68] Setting up /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883 for IP: 192.168.50.114
	I0812 11:52:07.183574   59908 certs.go:194] generating shared ca certs ...
	I0812 11:52:07.183598   59908 certs.go:226] acquiring lock for ca certs: {Name:mkab0b83a49d5695bc804bf960b468b7a47f73f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:52:07.183769   59908 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key
	I0812 11:52:07.183813   59908 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key
	I0812 11:52:07.183827   59908 certs.go:256] generating profile certs ...
	I0812 11:52:07.183948   59908 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/client.key
	I0812 11:52:07.184117   59908 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/apiserver.key.ebc625f3
	I0812 11:52:07.184198   59908 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/proxy-client.key
	I0812 11:52:07.184361   59908 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem (1338 bytes)
	W0812 11:52:07.184402   59908 certs.go:480] ignoring /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927_empty.pem, impossibly tiny 0 bytes
	I0812 11:52:07.184416   59908 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem (1679 bytes)
	I0812 11:52:07.184448   59908 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem (1082 bytes)
	I0812 11:52:07.184478   59908 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem (1123 bytes)
	I0812 11:52:07.184509   59908 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem (1679 bytes)
	I0812 11:52:07.184562   59908 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem (1708 bytes)
	I0812 11:52:07.185388   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 11:52:07.217465   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 11:52:07.248781   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 11:52:07.278177   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 11:52:07.313023   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0812 11:52:07.336720   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0812 11:52:07.360266   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 11:52:07.388850   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0812 11:52:07.413532   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem --> /usr/share/ca-certificates/10927.pem (1338 bytes)
	I0812 11:52:07.438304   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /usr/share/ca-certificates/109272.pem (1708 bytes)
	I0812 11:52:07.462084   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 11:52:07.486176   59908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 11:52:07.504165   59908 ssh_runner.go:195] Run: openssl version
	I0812 11:52:07.510273   59908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10927.pem && ln -fs /usr/share/ca-certificates/10927.pem /etc/ssl/certs/10927.pem"
	I0812 11:52:07.520671   59908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10927.pem
	I0812 11:52:07.525096   59908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 10:33 /usr/share/ca-certificates/10927.pem
	I0812 11:52:07.525158   59908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10927.pem
	I0812 11:52:07.531038   59908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10927.pem /etc/ssl/certs/51391683.0"
	I0812 11:52:07.542971   59908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109272.pem && ln -fs /usr/share/ca-certificates/109272.pem /etc/ssl/certs/109272.pem"
	I0812 11:52:07.554939   59908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109272.pem
	I0812 11:52:07.559868   59908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 10:33 /usr/share/ca-certificates/109272.pem
	I0812 11:52:07.559928   59908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109272.pem
	I0812 11:52:07.565655   59908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109272.pem /etc/ssl/certs/3ec20f2e.0"
	I0812 11:52:07.578139   59908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 11:52:07.589333   59908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:52:07.594679   59908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:52:07.594755   59908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:52:07.600616   59908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 11:52:07.612028   59908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 11:52:07.617247   59908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0812 11:52:07.623826   59908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0812 11:52:07.630443   59908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0812 11:52:07.637184   59908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0812 11:52:07.643723   59908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0812 11:52:07.650269   59908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0812 11:52:07.657049   59908 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-581883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-581883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:52:07.657136   59908 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0812 11:52:07.657218   59908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 11:52:07.695064   59908 cri.go:89] found id: ""
	I0812 11:52:07.695136   59908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0812 11:52:07.705707   59908 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0812 11:52:07.705725   59908 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0812 11:52:07.705781   59908 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0812 11:52:07.715748   59908 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0812 11:52:07.717230   59908 kubeconfig.go:125] found "default-k8s-diff-port-581883" server: "https://192.168.50.114:8444"
	I0812 11:52:07.720217   59908 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0812 11:52:07.730557   59908 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.114
	I0812 11:52:07.730596   59908 kubeadm.go:1160] stopping kube-system containers ...
	I0812 11:52:07.730609   59908 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0812 11:52:07.730672   59908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 11:52:07.766039   59908 cri.go:89] found id: ""
	I0812 11:52:07.766114   59908 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0812 11:52:07.784359   59908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 11:52:07.794750   59908 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 11:52:07.794781   59908 kubeadm.go:157] found existing configuration files:
	
	I0812 11:52:07.794957   59908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0812 11:52:07.805063   59908 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 11:52:07.805137   59908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 11:52:07.815283   59908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0812 11:52:07.825460   59908 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 11:52:07.825535   59908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 11:52:07.836322   59908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0812 11:52:07.846381   59908 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 11:52:07.846438   59908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 11:52:07.856471   59908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0812 11:52:07.866349   59908 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 11:52:07.866415   59908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 11:52:07.876379   59908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 11:52:07.886723   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 11:52:07.993071   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 11:52:08.756027   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0812 11:52:08.978821   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 11:52:09.048377   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0812 11:52:09.146562   59908 api_server.go:52] waiting for apiserver process to appear ...
	I0812 11:52:09.146658   59908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:52:09.647073   59908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:52:10.147700   59908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:52:10.647212   59908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:52:11.147702   59908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:52:11.174640   59908 api_server.go:72] duration metric: took 2.028079757s to wait for apiserver process to appear ...
	I0812 11:52:11.174665   59908 api_server.go:88] waiting for apiserver healthz status ...
	I0812 11:52:11.174698   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:11.175152   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": dial tcp 192.168.50.114:8444: connect: connection refused
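
From here the restart waits on the apiserver's /healthz endpoint, treating connection refused, 403 from the anonymous probe, and 500 while post-start hooks finish as "not ready yet". A minimal sketch of that polling pattern, assuming a plain HTTP client with certificate verification disabled (illustrative only, not minikube's api_server.go implementation):

package main

import (
    "crypto/tls"
    "fmt"
    "net/http"
    "time"
)

// waitForHealthz polls the given /healthz URL until it returns 200 or the
// deadline expires. Any error or non-200 status is treated as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
    // The apiserver serves a self-signed cert during bring-up, so skip
    // verification for this probe, as an anonymous client would.
    client := &http.Client{
        Timeout:   5 * time.Second,
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        resp, err := client.Get(url)
        if err == nil {
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return nil // healthz returned 200: apiserver is healthy
            }
            // 403 (anonymous user) and 500 (post-start hooks pending) fall through and retry.
        }
        time.Sleep(500 * time.Millisecond)
    }
    return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
}

func main() {
    if err := waitForHealthz("https://192.168.50.114:8444/healthz", 4*time.Minute); err != nil {
        fmt.Println(err)
    }
}
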
	I0812 11:52:11.674838   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:16.675764   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 11:52:16.675832   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:21.676084   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 11:52:21.676129   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:26.676483   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 11:52:26.676531   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:31.676994   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 11:52:31.677032   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:31.841007   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": read tcp 192.168.50.1:45150->192.168.50.114:8444: read: connection reset by peer
	I0812 11:52:32.175501   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:32.176109   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": dial tcp 192.168.50.114:8444: connect: connection refused
	I0812 11:52:32.675714   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:37.676528   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 11:52:37.676575   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:42.677744   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 11:52:42.677782   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:47.679062   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 11:52:47.679139   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:50.075690   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0812 11:52:50.075722   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0812 11:52:50.075736   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:50.231100   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0812 11:52:50.231129   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0812 11:52:50.231143   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:50.273525   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:50.273564   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:50.675005   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:50.681580   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:50.681621   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:51.175129   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:51.188048   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:51.188075   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:51.675218   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:51.684784   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:51.684822   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:52.175465   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:52.179666   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:52.179686   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:52.675234   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:52.680948   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:52.680972   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:53.175533   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:53.180849   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:53.180889   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:53.675084   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:53.680320   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:53.680352   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:54.175057   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:54.180061   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:54.180087   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:54.675117   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:54.679922   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:54.679950   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:55.175569   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:55.179883   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:55.179908   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:55.675522   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:55.680182   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 200:
	ok
	I0812 11:52:55.686477   59908 api_server.go:141] control plane version: v1.30.3
	I0812 11:52:55.686505   59908 api_server.go:131] duration metric: took 44.511833813s to wait for apiserver health ...
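The entries above follow the usual pattern for this phase: the probe keeps polling https://192.168.50.114:8444/healthz, treating 403 ("system:anonymous" forbidden) and 500 (pending poststarthooks) as "not ready yet" until a plain 200 "ok" comes back. A minimal Go sketch of that kind of loop, using the endpoint and rough timings seen in the log; this is illustrative only, not minikube's actual api_server.go:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// The apiserver presents a self-signed certificate during bring-up, so this
	// illustrative anonymous probe skips verification.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.50.114:8444/healthz" // endpoint taken from the log above
	deadline := time.Now().Add(4 * time.Minute)  // assumed overall budget
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body)) // expect "ok"
				return
			}
			// 403 (anonymous user forbidden) and 500 (failed poststarthooks)
			// both mean the control plane is still coming up; keep retrying.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}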
	I0812 11:52:55.686513   59908 cni.go:84] Creating CNI manager for ""
	I0812 11:52:55.686519   59908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:52:55.688415   59908 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0812 11:52:55.689745   59908 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0812 11:52:55.700910   59908 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
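For context, a bridge CNI configuration of this kind is a small JSON conflist dropped into /etc/cni/net.d. The sketch below writes a minimal example with the same directory and file name shown in the log; the plugin list, subnet, and exact contents are assumptions for illustration and are not the 496-byte file minikube actually generates:

package main

import (
	"log"
	"os"
	"path/filepath"
)

// A minimal bridge + portmap conflist; the field values here are illustrative.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	dir := "/etc/cni/net.d" // same directory the log shows being created
	if err := os.MkdirAll(dir, 0o755); err != nil {
		log.Fatal(err)
	}
	path := filepath.Join(dir, "1-k8s.conflist")
	if err := os.WriteFile(path, []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
	log.Printf("wrote %s (%d bytes)", path, len(bridgeConflist))
}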
	I0812 11:52:55.719588   59908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 11:52:55.729581   59908 system_pods.go:59] 8 kube-system pods found
	I0812 11:52:55.729622   59908 system_pods.go:61] "coredns-7db6d8ff4d-86flr" [703201f6-ba92-45f7-b273-ee508cf51e2b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0812 11:52:55.729630   59908 system_pods.go:61] "etcd-default-k8s-diff-port-581883" [98074b68-6274-4496-8fd3-7bad8b59b063] Running
	I0812 11:52:55.729640   59908 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-581883" [3f9d02cd-8b6f-4640-98e2-ebc5145444ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0812 11:52:55.729651   59908 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-581883" [b6c17f8f-18eb-41e6-9ef6-bab882066d51] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0812 11:52:55.729662   59908 system_pods.go:61] "kube-proxy-h6fzz" [b0f6bcc8-263a-4b23-a60b-c67475a868bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0812 11:52:55.729673   59908 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-581883" [3b8e21a4-9578-40fc-be22-8a469b5e9ff2] Running
	I0812 11:52:55.729682   59908 system_pods.go:61] "metrics-server-569cc877fc-wcpgl" [11f6c813-ebc1-4712-b758-cb08ff921d77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 11:52:55.729693   59908 system_pods.go:61] "storage-provisioner" [93affc3b-a4e7-4c19-824c-3eec33616acc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0812 11:52:55.729702   59908 system_pods.go:74] duration metric: took 10.095218ms to wait for pod list to return data ...
	I0812 11:52:55.729712   59908 node_conditions.go:102] verifying NodePressure condition ...
	I0812 11:52:55.733812   59908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 11:52:55.733841   59908 node_conditions.go:123] node cpu capacity is 2
	I0812 11:52:55.733857   59908 node_conditions.go:105] duration metric: took 4.136436ms to run NodePressure ...
	I0812 11:52:55.733877   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 11:52:56.014193   59908 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0812 11:52:56.026600   59908 kubeadm.go:739] kubelet initialised
	I0812 11:52:56.026629   59908 kubeadm.go:740] duration metric: took 12.405458ms waiting for restarted kubelet to initialise ...
	I0812 11:52:56.026637   59908 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:52:56.031669   59908 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-86flr" in "kube-system" namespace to be "Ready" ...
	I0812 11:52:56.042499   59908 pod_ready.go:97] node "default-k8s-diff-port-581883" hosting pod "coredns-7db6d8ff4d-86flr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.042526   59908 pod_ready.go:81] duration metric: took 10.82967ms for pod "coredns-7db6d8ff4d-86flr" in "kube-system" namespace to be "Ready" ...
	E0812 11:52:56.042537   59908 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-581883" hosting pod "coredns-7db6d8ff4d-86flr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.042547   59908 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:52:56.048265   59908 pod_ready.go:97] node "default-k8s-diff-port-581883" hosting pod "etcd-default-k8s-diff-port-581883" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.048290   59908 pod_ready.go:81] duration metric: took 5.732651ms for pod "etcd-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	E0812 11:52:56.048307   59908 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-581883" hosting pod "etcd-default-k8s-diff-port-581883" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.048315   59908 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:52:56.054613   59908 pod_ready.go:97] node "default-k8s-diff-port-581883" hosting pod "kube-apiserver-default-k8s-diff-port-581883" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.054639   59908 pod_ready.go:81] duration metric: took 6.314697ms for pod "kube-apiserver-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	E0812 11:52:56.054652   59908 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-581883" hosting pod "kube-apiserver-default-k8s-diff-port-581883" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.054660   59908 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:52:56.125380   59908 pod_ready.go:97] node "default-k8s-diff-port-581883" hosting pod "kube-controller-manager-default-k8s-diff-port-581883" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.125418   59908 pod_ready.go:81] duration metric: took 70.74807ms for pod "kube-controller-manager-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	E0812 11:52:56.125433   59908 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-581883" hosting pod "kube-controller-manager-default-k8s-diff-port-581883" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.125441   59908 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-h6fzz" in "kube-system" namespace to be "Ready" ...
	I0812 11:52:56.523216   59908 pod_ready.go:97] node "default-k8s-diff-port-581883" hosting pod "kube-proxy-h6fzz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.523251   59908 pod_ready.go:81] duration metric: took 397.801141ms for pod "kube-proxy-h6fzz" in "kube-system" namespace to be "Ready" ...
	E0812 11:52:56.523263   59908 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-581883" hosting pod "kube-proxy-h6fzz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.523272   59908 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:52:56.923229   59908 pod_ready.go:97] node "default-k8s-diff-port-581883" hosting pod "kube-scheduler-default-k8s-diff-port-581883" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.923269   59908 pod_ready.go:81] duration metric: took 399.981518ms for pod "kube-scheduler-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	E0812 11:52:56.923285   59908 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-581883" hosting pod "kube-scheduler-default-k8s-diff-port-581883" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.923295   59908 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace to be "Ready" ...
	I0812 11:52:57.323846   59908 pod_ready.go:97] node "default-k8s-diff-port-581883" hosting pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:57.323877   59908 pod_ready.go:81] duration metric: took 400.572011ms for pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace to be "Ready" ...
	E0812 11:52:57.323888   59908 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-581883" hosting pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:57.323896   59908 pod_ready.go:38] duration metric: took 1.297248784s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
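The extra-wait loop above keys off the Ready condition of each system-critical pod (and of the node hosting it, which is why every pod is skipped while the node reports "Ready":"False"). A short client-go sketch of an equivalent manual check, assuming the kubeconfig path that appears later in this log; it only lists kube-system pods and their Ready condition and is not minikube's pod_ready.go logic:

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the log below; adjust for another environment.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19409-3774/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, pod := range pods.Items {
		ready := false
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%-55s ready=%v phase=%s\n", pod.Name, ready, pod.Status.Phase)
	}
}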
	I0812 11:52:57.323911   59908 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 11:52:57.336325   59908 ops.go:34] apiserver oom_adj: -16
	I0812 11:52:57.336345   59908 kubeadm.go:597] duration metric: took 49.630615077s to restartPrimaryControlPlane
	I0812 11:52:57.336365   59908 kubeadm.go:394] duration metric: took 49.67932273s to StartCluster
	I0812 11:52:57.336380   59908 settings.go:142] acquiring lock: {Name:mk4060151bda1f131d6abfbbd19b6a8ab3b5e774 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:52:57.336447   59908 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 11:52:57.338064   59908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/kubeconfig: {Name:mke68f1d372d8e5baa69199b426efaec54860499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:52:57.338331   59908 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.114 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 11:52:57.338433   59908 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0812 11:52:57.338521   59908 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-581883"
	I0812 11:52:57.338536   59908 config.go:182] Loaded profile config "default-k8s-diff-port-581883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 11:52:57.338551   59908 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-581883"
	I0812 11:52:57.338587   59908 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-581883"
	I0812 11:52:57.338558   59908 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-581883"
	W0812 11:52:57.338662   59908 addons.go:243] addon storage-provisioner should already be in state true
	I0812 11:52:57.338695   59908 host.go:66] Checking if "default-k8s-diff-port-581883" exists ...
	I0812 11:52:57.338563   59908 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-581883"
	I0812 11:52:57.338755   59908 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-581883"
	W0812 11:52:57.338764   59908 addons.go:243] addon metrics-server should already be in state true
	I0812 11:52:57.338788   59908 host.go:66] Checking if "default-k8s-diff-port-581883" exists ...
	I0812 11:52:57.339032   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:52:57.339033   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:52:57.339035   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:52:57.339067   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:52:57.339084   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:52:57.339065   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:52:57.340300   59908 out.go:177] * Verifying Kubernetes components...
	I0812 11:52:57.342119   59908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:52:57.356069   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43019
	I0812 11:52:57.356172   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35497
	I0812 11:52:57.356610   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:52:57.356723   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:52:57.357168   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:52:57.357189   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:52:57.357329   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:52:57.357356   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:52:57.357543   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:52:57.357718   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:52:57.358105   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:52:57.358143   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:52:57.358331   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:52:57.358367   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:52:57.360134   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41803
	I0812 11:52:57.360536   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:52:57.361016   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:52:57.361041   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:52:57.361371   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:52:57.361569   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetState
	I0812 11:52:57.365260   59908 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-581883"
	W0812 11:52:57.365279   59908 addons.go:243] addon default-storageclass should already be in state true
	I0812 11:52:57.365312   59908 host.go:66] Checking if "default-k8s-diff-port-581883" exists ...
	I0812 11:52:57.365596   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:52:57.365639   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:52:57.377488   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34035
	I0812 11:52:57.378076   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:52:57.378581   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41469
	I0812 11:52:57.378657   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:52:57.378680   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:52:57.378965   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:52:57.379025   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:52:57.379251   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetState
	I0812 11:52:57.379656   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:52:57.379683   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:52:57.380105   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:52:57.380391   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetState
	I0812 11:52:57.382273   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:57.382496   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:57.383601   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33225
	I0812 11:52:57.384062   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:52:57.384739   59908 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 11:52:57.384750   59908 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0812 11:52:57.384914   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:52:57.384940   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:52:57.385293   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:52:57.385956   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:52:57.386002   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:52:57.386314   59908 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0812 11:52:57.386336   59908 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0812 11:52:57.386355   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:57.386386   59908 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 11:52:57.386398   59908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 11:52:57.386416   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:57.390135   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:57.390335   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:57.390669   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:57.390729   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:57.391183   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:57.391187   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:57.391251   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:57.391393   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:57.391432   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:57.391571   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:57.391592   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:57.391722   59908 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa Username:docker}
	I0812 11:52:57.391758   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:57.391921   59908 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa Username:docker}
	I0812 11:52:57.431097   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39357
	I0812 11:52:57.431600   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:52:57.432116   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:52:57.432140   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:52:57.432506   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:52:57.432702   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetState
	I0812 11:52:57.434513   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:57.434753   59908 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 11:52:57.434772   59908 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 11:52:57.434791   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:57.438433   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:57.438917   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:57.438951   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:57.439150   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:57.439384   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:57.439574   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:57.439744   59908 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa Username:docker}
	I0812 11:52:57.547325   59908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 11:52:57.566163   59908 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-581883" to be "Ready" ...
	I0812 11:52:57.633469   59908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 11:52:57.641330   59908 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0812 11:52:57.641355   59908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0812 11:52:57.662909   59908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 11:52:57.691294   59908 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0812 11:52:57.691321   59908 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0812 11:52:57.746668   59908 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 11:52:57.746693   59908 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0812 11:52:57.787970   59908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 11:52:58.628106   59908 main.go:141] libmachine: Making call to close driver server
	I0812 11:52:58.628134   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Close
	I0812 11:52:58.628106   59908 main.go:141] libmachine: Making call to close driver server
	I0812 11:52:58.628195   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Close
	I0812 11:52:58.628464   59908 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:52:58.628481   59908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:52:58.628490   59908 main.go:141] libmachine: Making call to close driver server
	I0812 11:52:58.628498   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Close
	I0812 11:52:58.628611   59908 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:52:58.628626   59908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:52:58.628647   59908 main.go:141] libmachine: Making call to close driver server
	I0812 11:52:58.628651   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | Closing plugin on server side
	I0812 11:52:58.628655   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Close
	I0812 11:52:58.628775   59908 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:52:58.628785   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | Closing plugin on server side
	I0812 11:52:58.628791   59908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:52:58.630407   59908 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:52:58.630424   59908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:52:58.634739   59908 main.go:141] libmachine: Making call to close driver server
	I0812 11:52:58.634759   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Close
	I0812 11:52:58.635034   59908 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:52:58.635053   59908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:52:58.643171   59908 main.go:141] libmachine: Making call to close driver server
	I0812 11:52:58.643191   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Close
	I0812 11:52:58.643484   59908 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:52:58.643502   59908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:52:58.643511   59908 main.go:141] libmachine: Making call to close driver server
	I0812 11:52:58.643520   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Close
	I0812 11:52:58.643532   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | Closing plugin on server side
	I0812 11:52:58.643732   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | Closing plugin on server side
	I0812 11:52:58.643754   59908 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:52:58.643762   59908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:52:58.643771   59908 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-581883"
	I0812 11:52:58.645811   59908 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0812 11:52:58.647443   59908 addons.go:510] duration metric: took 1.309010451s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
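The lines above trace the addon flow: each manifest is copied into /etc/kubernetes/addons on the guest and then applied with the bundled kubectl against the in-VM kubeconfig. As a minimal sketch only (assuming a local exec call in place of minikube's ssh_runner, and reusing the paths that appear in the log), the apply step looks roughly like this in Go:

package main

import (
	"fmt"
	"os/exec"
)

// Rough approximation of the "kubectl apply -f ..." step logged above.
// The ssh_runner transport is replaced by a plain local exec call, which is an
// assumption for brevity, not how minikube actually reaches the guest VM.
func main() {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.30.3/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
	)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}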
	I0812 11:52:59.569732   59908 node_ready.go:53] node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:53:01.570136   59908 node_ready.go:53] node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:53:04.069965   59908 node_ready.go:53] node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:53:05.570009   59908 node_ready.go:49] node "default-k8s-diff-port-581883" has status "Ready":"True"
	I0812 11:53:05.570039   59908 node_ready.go:38] duration metric: took 8.003840242s for node "default-k8s-diff-port-581883" to be "Ready" ...
	I0812 11:53:05.570050   59908 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:53:05.577206   59908 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-86flr" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:07.584071   59908 pod_ready.go:102] pod "coredns-7db6d8ff4d-86flr" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:08.583523   59908 pod_ready.go:92] pod "coredns-7db6d8ff4d-86flr" in "kube-system" namespace has status "Ready":"True"
	I0812 11:53:08.583550   59908 pod_ready.go:81] duration metric: took 3.006317399s for pod "coredns-7db6d8ff4d-86flr" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.583559   59908 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.589137   59908 pod_ready.go:92] pod "etcd-default-k8s-diff-port-581883" in "kube-system" namespace has status "Ready":"True"
	I0812 11:53:08.589163   59908 pod_ready.go:81] duration metric: took 5.595854ms for pod "etcd-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.589175   59908 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.593746   59908 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-581883" in "kube-system" namespace has status "Ready":"True"
	I0812 11:53:08.593767   59908 pod_ready.go:81] duration metric: took 4.585829ms for pod "kube-apiserver-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.593776   59908 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.598058   59908 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-581883" in "kube-system" namespace has status "Ready":"True"
	I0812 11:53:08.598078   59908 pod_ready.go:81] duration metric: took 4.296254ms for pod "kube-controller-manager-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.598087   59908 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h6fzz" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.603106   59908 pod_ready.go:92] pod "kube-proxy-h6fzz" in "kube-system" namespace has status "Ready":"True"
	I0812 11:53:08.603127   59908 pod_ready.go:81] duration metric: took 5.033938ms for pod "kube-proxy-h6fzz" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.603136   59908 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.981404   59908 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-581883" in "kube-system" namespace has status "Ready":"True"
	I0812 11:53:08.981429   59908 pod_ready.go:81] duration metric: took 378.286388ms for pod "kube-scheduler-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.981439   59908 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:10.988175   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:13.488230   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:15.987639   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:18.487540   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:20.490803   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:22.987167   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:25.488840   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:27.988661   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:30.487605   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:32.487748   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:34.488109   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:36.987016   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:38.987165   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:40.989187   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:43.487407   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:45.487714   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:47.487961   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:49.988540   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:52.487216   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:54.487433   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:56.487958   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:58.489095   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:00.987353   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:02.989138   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:05.488174   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:07.988702   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:10.488396   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:12.988099   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:14.988220   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:16.988395   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:19.491228   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:21.987397   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:23.987898   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:26.487993   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:28.489384   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:30.989371   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:33.488670   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:35.987526   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:37.988823   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:40.488488   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:42.488612   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:44.989023   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:46.990079   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:49.488206   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:51.488446   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:53.988007   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:56.488200   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:58.490348   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:00.988756   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:03.487527   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:05.987624   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:07.989990   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:10.487888   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:12.488656   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:14.489648   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:16.988551   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:19.488408   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:21.988902   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:24.487895   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:26.988377   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:29.488082   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:31.986995   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:33.987359   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:35.989125   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:38.489945   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:40.493189   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:42.988399   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:45.487307   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:47.487758   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:49.487798   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:51.987795   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:53.988376   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:55.990060   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:58.487684   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:00.487893   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:02.988185   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:04.988436   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:07.487867   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:09.987976   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:11.988078   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:13.988354   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:15.988676   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:18.488658   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:20.987780   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:23.486965   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:25.487065   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:27.487891   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:29.488825   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:31.988732   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:34.487771   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:36.988555   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:39.489154   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:41.987687   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:43.990010   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:45.991210   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:48.487381   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:50.987943   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:53.487657   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:55.987206   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:57.988164   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:59.990098   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:57:02.486732   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:57:04.488492   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:57:06.987443   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:57:08.988727   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:57:08.988756   59908 pod_ready.go:81] duration metric: took 4m0.007310185s for pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace to be "Ready" ...
	E0812 11:57:08.988768   59908 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0812 11:57:08.988777   59908 pod_ready.go:38] duration metric: took 4m3.418715457s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
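At this point the extra readiness wait has expired: every system-critical pod went Ready except metrics-server-569cc877fc-wcpgl, and the 4m budget ran out polling it. A minimal illustrative sketch of this kind of readiness poll with client-go follows; the kubeconfig path, poll interval and timeout are assumptions, and this is not minikube's pod_ready.go implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout expires.
// Illustrative sketch only.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	// Kubeconfig path is an assumption for the example.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "metrics-server-569cc877fc-wcpgl", 4*time.Minute); err != nil {
		fmt.Println("pod never became Ready:", err)
	}
}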
	I0812 11:57:08.988795   59908 api_server.go:52] waiting for apiserver process to appear ...
	I0812 11:57:08.988823   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:57:08.988909   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:57:09.035203   59908 cri.go:89] found id: "87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98"
	I0812 11:57:09.035230   59908 cri.go:89] found id: "399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1"
	I0812 11:57:09.035236   59908 cri.go:89] found id: ""
	I0812 11:57:09.035244   59908 logs.go:276] 2 containers: [87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98 399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1]
	I0812 11:57:09.035298   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.039940   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.044354   59908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:57:09.044430   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:57:09.079692   59908 cri.go:89] found id: "a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126"
	I0812 11:57:09.079716   59908 cri.go:89] found id: ""
	I0812 11:57:09.079725   59908 logs.go:276] 1 containers: [a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126]
	I0812 11:57:09.079788   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.084499   59908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:57:09.084576   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:57:09.124721   59908 cri.go:89] found id: "72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4"
	I0812 11:57:09.124750   59908 cri.go:89] found id: ""
	I0812 11:57:09.124761   59908 logs.go:276] 1 containers: [72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4]
	I0812 11:57:09.124828   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.128921   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:57:09.128997   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:57:09.164960   59908 cri.go:89] found id: "3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804"
	I0812 11:57:09.164982   59908 cri.go:89] found id: ""
	I0812 11:57:09.164995   59908 logs.go:276] 1 containers: [3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804]
	I0812 11:57:09.165046   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.169043   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:57:09.169116   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:57:09.211298   59908 cri.go:89] found id: "b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26"
	I0812 11:57:09.211322   59908 cri.go:89] found id: ""
	I0812 11:57:09.211329   59908 logs.go:276] 1 containers: [b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26]
	I0812 11:57:09.211377   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.215348   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:57:09.215440   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:57:09.269500   59908 cri.go:89] found id: "b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f"
	I0812 11:57:09.269519   59908 cri.go:89] found id: "f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f"
	I0812 11:57:09.269523   59908 cri.go:89] found id: ""
	I0812 11:57:09.269530   59908 logs.go:276] 2 containers: [b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f]
	I0812 11:57:09.269575   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.273724   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.277660   59908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:57:09.277732   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:57:09.327668   59908 cri.go:89] found id: ""
	I0812 11:57:09.327691   59908 logs.go:276] 0 containers: []
	W0812 11:57:09.327698   59908 logs.go:278] No container was found matching "kindnet"
	I0812 11:57:09.327703   59908 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0812 11:57:09.327765   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0812 11:57:09.363936   59908 cri.go:89] found id: "3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c"
	I0812 11:57:09.363957   59908 cri.go:89] found id: ""
	I0812 11:57:09.363964   59908 logs.go:276] 1 containers: [3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c]
	I0812 11:57:09.364010   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.368123   59908 logs.go:123] Gathering logs for kubelet ...
	I0812 11:57:09.368151   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:57:09.441676   59908 logs.go:123] Gathering logs for kube-apiserver [399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1] ...
	I0812 11:57:09.441725   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1"
	I0812 11:57:09.483275   59908 logs.go:123] Gathering logs for kube-controller-manager [b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f] ...
	I0812 11:57:09.483317   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f"
	I0812 11:57:09.544504   59908 logs.go:123] Gathering logs for kube-apiserver [87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98] ...
	I0812 11:57:09.544539   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98"
	I0812 11:57:09.594808   59908 logs.go:123] Gathering logs for kube-scheduler [3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804] ...
	I0812 11:57:09.594839   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804"
	I0812 11:57:09.636141   59908 logs.go:123] Gathering logs for kube-proxy [b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26] ...
	I0812 11:57:09.636178   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26"
	I0812 11:57:09.673996   59908 logs.go:123] Gathering logs for kube-controller-manager [f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f] ...
	I0812 11:57:09.674023   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f"
	I0812 11:57:09.711480   59908 logs.go:123] Gathering logs for storage-provisioner [3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c] ...
	I0812 11:57:09.711504   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c"
	I0812 11:57:09.747830   59908 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:57:09.747861   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:57:10.268559   59908 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:57:10.268607   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 11:57:10.394461   59908 logs.go:123] Gathering logs for etcd [a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126] ...
	I0812 11:57:10.394495   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126"
	I0812 11:57:10.439760   59908 logs.go:123] Gathering logs for coredns [72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4] ...
	I0812 11:57:10.439796   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4"
	I0812 11:57:10.474457   59908 logs.go:123] Gathering logs for container status ...
	I0812 11:57:10.474496   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:57:10.515430   59908 logs.go:123] Gathering logs for dmesg ...
	I0812 11:57:10.515464   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:57:13.029229   59908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:57:13.045764   59908 api_server.go:72] duration metric: took 4m15.707395821s to wait for apiserver process to appear ...
	I0812 11:57:13.045795   59908 api_server.go:88] waiting for apiserver healthz status ...
	I0812 11:57:13.045832   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:57:13.045878   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:57:13.082792   59908 cri.go:89] found id: "87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98"
	I0812 11:57:13.082818   59908 cri.go:89] found id: "399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1"
	I0812 11:57:13.082824   59908 cri.go:89] found id: ""
	I0812 11:57:13.082833   59908 logs.go:276] 2 containers: [87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98 399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1]
	I0812 11:57:13.082893   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.087987   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.092188   59908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:57:13.092251   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:57:13.135193   59908 cri.go:89] found id: "a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126"
	I0812 11:57:13.135226   59908 cri.go:89] found id: ""
	I0812 11:57:13.135237   59908 logs.go:276] 1 containers: [a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126]
	I0812 11:57:13.135293   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.140269   59908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:57:13.140344   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:57:13.193436   59908 cri.go:89] found id: "72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4"
	I0812 11:57:13.193458   59908 cri.go:89] found id: ""
	I0812 11:57:13.193465   59908 logs.go:276] 1 containers: [72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4]
	I0812 11:57:13.193539   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.198507   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:57:13.198589   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:57:13.241696   59908 cri.go:89] found id: "3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804"
	I0812 11:57:13.241718   59908 cri.go:89] found id: ""
	I0812 11:57:13.241725   59908 logs.go:276] 1 containers: [3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804]
	I0812 11:57:13.241773   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.246865   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:57:13.246937   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:57:13.293284   59908 cri.go:89] found id: "b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26"
	I0812 11:57:13.293308   59908 cri.go:89] found id: ""
	I0812 11:57:13.293315   59908 logs.go:276] 1 containers: [b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26]
	I0812 11:57:13.293380   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.297698   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:57:13.297772   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:57:13.342737   59908 cri.go:89] found id: "b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f"
	I0812 11:57:13.342757   59908 cri.go:89] found id: "f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f"
	I0812 11:57:13.342760   59908 cri.go:89] found id: ""
	I0812 11:57:13.342767   59908 logs.go:276] 2 containers: [b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f]
	I0812 11:57:13.342809   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.347634   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.351733   59908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:57:13.351794   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:57:13.394540   59908 cri.go:89] found id: ""
	I0812 11:57:13.394570   59908 logs.go:276] 0 containers: []
	W0812 11:57:13.394580   59908 logs.go:278] No container was found matching "kindnet"
	I0812 11:57:13.394594   59908 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0812 11:57:13.394647   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0812 11:57:13.433910   59908 cri.go:89] found id: "3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c"
	I0812 11:57:13.433934   59908 cri.go:89] found id: ""
	I0812 11:57:13.433944   59908 logs.go:276] 1 containers: [3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c]
	I0812 11:57:13.434001   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.437999   59908 logs.go:123] Gathering logs for dmesg ...
	I0812 11:57:13.438024   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:57:13.451945   59908 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:57:13.451973   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 11:57:13.561957   59908 logs.go:123] Gathering logs for coredns [72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4] ...
	I0812 11:57:13.561990   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4"
	I0812 11:57:13.602729   59908 logs.go:123] Gathering logs for kubelet ...
	I0812 11:57:13.602754   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:57:13.673729   59908 logs.go:123] Gathering logs for kube-apiserver [399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1] ...
	I0812 11:57:13.673766   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1"
	I0812 11:57:13.714814   59908 logs.go:123] Gathering logs for kube-proxy [b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26] ...
	I0812 11:57:13.714843   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26"
	I0812 11:57:13.755876   59908 logs.go:123] Gathering logs for kube-controller-manager [b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f] ...
	I0812 11:57:13.755902   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f"
	I0812 11:57:13.814263   59908 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:57:13.814301   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:57:14.305206   59908 logs.go:123] Gathering logs for container status ...
	I0812 11:57:14.305243   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:57:14.349455   59908 logs.go:123] Gathering logs for kube-apiserver [87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98] ...
	I0812 11:57:14.349486   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98"
	I0812 11:57:14.399731   59908 logs.go:123] Gathering logs for etcd [a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126] ...
	I0812 11:57:14.399765   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126"
	I0812 11:57:14.443494   59908 logs.go:123] Gathering logs for kube-scheduler [3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804] ...
	I0812 11:57:14.443524   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804"
	I0812 11:57:14.486034   59908 logs.go:123] Gathering logs for kube-controller-manager [f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f] ...
	I0812 11:57:14.486070   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f"
	I0812 11:57:14.524991   59908 logs.go:123] Gathering logs for storage-provisioner [3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c] ...
	I0812 11:57:14.525018   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c"
	I0812 11:57:17.062314   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:57:17.068363   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 200:
	ok
	I0812 11:57:17.069818   59908 api_server.go:141] control plane version: v1.30.3
	I0812 11:57:17.069845   59908 api_server.go:131] duration metric: took 4.024042567s to wait for apiserver health ...
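The healthz probe above does an HTTPS GET against the apiserver's /healthz endpoint and treats a 200 response with body "ok" as healthy. A minimal sketch, assuming a skip-verify TLS client rather than the cluster's real CA and client certificates that minikube actually uses:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// Minimal healthz check against the endpoint logged above.
// InsecureSkipVerify is an assumption for the example only.
func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.50.114:8444/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200 and "ok"
}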
	I0812 11:57:17.069856   59908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 11:57:17.069882   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:57:17.069937   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:57:17.107213   59908 cri.go:89] found id: "87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98"
	I0812 11:57:17.107233   59908 cri.go:89] found id: "399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1"
	I0812 11:57:17.107237   59908 cri.go:89] found id: ""
	I0812 11:57:17.107244   59908 logs.go:276] 2 containers: [87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98 399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1]
	I0812 11:57:17.107297   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.117678   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.121897   59908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:57:17.121962   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:57:17.159450   59908 cri.go:89] found id: "a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126"
	I0812 11:57:17.159480   59908 cri.go:89] found id: ""
	I0812 11:57:17.159489   59908 logs.go:276] 1 containers: [a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126]
	I0812 11:57:17.159548   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.164078   59908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:57:17.164156   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:57:17.207977   59908 cri.go:89] found id: "72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4"
	I0812 11:57:17.208002   59908 cri.go:89] found id: ""
	I0812 11:57:17.208010   59908 logs.go:276] 1 containers: [72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4]
	I0812 11:57:17.208063   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.212055   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:57:17.212136   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:57:17.259289   59908 cri.go:89] found id: "3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804"
	I0812 11:57:17.259316   59908 cri.go:89] found id: ""
	I0812 11:57:17.259327   59908 logs.go:276] 1 containers: [3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804]
	I0812 11:57:17.259393   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.263818   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:57:17.263896   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:57:17.301371   59908 cri.go:89] found id: "b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26"
	I0812 11:57:17.301404   59908 cri.go:89] found id: ""
	I0812 11:57:17.301413   59908 logs.go:276] 1 containers: [b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26]
	I0812 11:57:17.301473   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.306038   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:57:17.306100   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:57:17.343982   59908 cri.go:89] found id: "b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f"
	I0812 11:57:17.344006   59908 cri.go:89] found id: "f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f"
	I0812 11:57:17.344017   59908 cri.go:89] found id: ""
	I0812 11:57:17.344027   59908 logs.go:276] 2 containers: [b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f]
	I0812 11:57:17.344086   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.348135   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.352720   59908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:57:17.352790   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:57:17.392647   59908 cri.go:89] found id: ""
	I0812 11:57:17.392673   59908 logs.go:276] 0 containers: []
	W0812 11:57:17.392682   59908 logs.go:278] No container was found matching "kindnet"
	I0812 11:57:17.392687   59908 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0812 11:57:17.392740   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0812 11:57:17.429067   59908 cri.go:89] found id: "3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c"
	I0812 11:57:17.429088   59908 cri.go:89] found id: ""
	I0812 11:57:17.429095   59908 logs.go:276] 1 containers: [3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c]
	I0812 11:57:17.429140   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.433406   59908 logs.go:123] Gathering logs for etcd [a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126] ...
	I0812 11:57:17.433433   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126"
	I0812 11:57:17.479091   59908 logs.go:123] Gathering logs for container status ...
	I0812 11:57:17.479123   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:57:17.519579   59908 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:57:17.519614   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 11:57:17.620109   59908 logs.go:123] Gathering logs for kube-apiserver [399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1] ...
	I0812 11:57:17.620143   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1"
	I0812 11:57:17.659604   59908 logs.go:123] Gathering logs for kube-controller-manager [b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f] ...
	I0812 11:57:17.659639   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f"
	I0812 11:57:17.712850   59908 logs.go:123] Gathering logs for kube-controller-manager [f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f] ...
	I0812 11:57:17.712901   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f"
	I0812 11:57:17.750567   59908 logs.go:123] Gathering logs for kubelet ...
	I0812 11:57:17.750595   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:57:17.822429   59908 logs.go:123] Gathering logs for coredns [72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4] ...
	I0812 11:57:17.822459   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4"
	I0812 11:57:17.864303   59908 logs.go:123] Gathering logs for kube-scheduler [3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804] ...
	I0812 11:57:17.864338   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804"
	I0812 11:57:17.904307   59908 logs.go:123] Gathering logs for kube-proxy [b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26] ...
	I0812 11:57:17.904340   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26"
	I0812 11:57:17.939073   59908 logs.go:123] Gathering logs for storage-provisioner [3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c] ...
	I0812 11:57:17.939103   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c"
	I0812 11:57:17.982222   59908 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:57:17.982253   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:57:18.369007   59908 logs.go:123] Gathering logs for dmesg ...
	I0812 11:57:18.369053   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:57:18.385187   59908 logs.go:123] Gathering logs for kube-apiserver [87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98] ...
	I0812 11:57:18.385219   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98"
	I0812 11:57:20.949075   59908 system_pods.go:59] 8 kube-system pods found
	I0812 11:57:20.949110   59908 system_pods.go:61] "coredns-7db6d8ff4d-86flr" [703201f6-ba92-45f7-b273-ee508cf51e2b] Running
	I0812 11:57:20.949115   59908 system_pods.go:61] "etcd-default-k8s-diff-port-581883" [98074b68-6274-4496-8fd3-7bad8b59b063] Running
	I0812 11:57:20.949119   59908 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-581883" [3f9d02cd-8b6f-4640-98e2-ebc5145444ea] Running
	I0812 11:57:20.949122   59908 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-581883" [b6c17f8f-18eb-41e6-9ef6-bab882066d51] Running
	I0812 11:57:20.949125   59908 system_pods.go:61] "kube-proxy-h6fzz" [b0f6bcc8-263a-4b23-a60b-c67475a868bf] Running
	I0812 11:57:20.949128   59908 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-581883" [3b8e21a4-9578-40fc-be22-8a469b5e9ff2] Running
	I0812 11:57:20.949133   59908 system_pods.go:61] "metrics-server-569cc877fc-wcpgl" [11f6c813-ebc1-4712-b758-cb08ff921d77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 11:57:20.949139   59908 system_pods.go:61] "storage-provisioner" [93affc3b-a4e7-4c19-824c-3eec33616acc] Running
	I0812 11:57:20.949146   59908 system_pods.go:74] duration metric: took 3.879283024s to wait for pod list to return data ...
	I0812 11:57:20.949153   59908 default_sa.go:34] waiting for default service account to be created ...
	I0812 11:57:20.951355   59908 default_sa.go:45] found service account: "default"
	I0812 11:57:20.951376   59908 default_sa.go:55] duration metric: took 2.217928ms for default service account to be created ...
	I0812 11:57:20.951383   59908 system_pods.go:116] waiting for k8s-apps to be running ...
	I0812 11:57:20.956479   59908 system_pods.go:86] 8 kube-system pods found
	I0812 11:57:20.956505   59908 system_pods.go:89] "coredns-7db6d8ff4d-86flr" [703201f6-ba92-45f7-b273-ee508cf51e2b] Running
	I0812 11:57:20.956513   59908 system_pods.go:89] "etcd-default-k8s-diff-port-581883" [98074b68-6274-4496-8fd3-7bad8b59b063] Running
	I0812 11:57:20.956519   59908 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-581883" [3f9d02cd-8b6f-4640-98e2-ebc5145444ea] Running
	I0812 11:57:20.956527   59908 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-581883" [b6c17f8f-18eb-41e6-9ef6-bab882066d51] Running
	I0812 11:57:20.956532   59908 system_pods.go:89] "kube-proxy-h6fzz" [b0f6bcc8-263a-4b23-a60b-c67475a868bf] Running
	I0812 11:57:20.956537   59908 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-581883" [3b8e21a4-9578-40fc-be22-8a469b5e9ff2] Running
	I0812 11:57:20.956546   59908 system_pods.go:89] "metrics-server-569cc877fc-wcpgl" [11f6c813-ebc1-4712-b758-cb08ff921d77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 11:57:20.956553   59908 system_pods.go:89] "storage-provisioner" [93affc3b-a4e7-4c19-824c-3eec33616acc] Running
	I0812 11:57:20.956564   59908 system_pods.go:126] duration metric: took 5.175002ms to wait for k8s-apps to be running ...
	I0812 11:57:20.956572   59908 system_svc.go:44] waiting for kubelet service to be running ....
	I0812 11:57:20.956624   59908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:57:20.971826   59908 system_svc.go:56] duration metric: took 15.246626ms WaitForService to wait for kubelet
	I0812 11:57:20.971856   59908 kubeadm.go:582] duration metric: took 4m23.633490244s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 11:57:20.971881   59908 node_conditions.go:102] verifying NodePressure condition ...
	I0812 11:57:20.974643   59908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 11:57:20.974661   59908 node_conditions.go:123] node cpu capacity is 2
	I0812 11:57:20.974671   59908 node_conditions.go:105] duration metric: took 2.785ms to run NodePressure ...
	I0812 11:57:20.974681   59908 start.go:241] waiting for startup goroutines ...
	I0812 11:57:20.974688   59908 start.go:246] waiting for cluster config update ...
	I0812 11:57:20.974700   59908 start.go:255] writing updated cluster config ...
	I0812 11:57:20.975043   59908 ssh_runner.go:195] Run: rm -f paused
	I0812 11:57:21.025000   59908 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0812 11:57:21.028153   59908 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-581883" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 12 12:00:25 old-k8s-version-835962 crio[649]: time="2024-08-12 12:00:25.804898373Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464025804875398,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f161f3ea-e159-4497-b12c-7e2f14818e58 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:00:25 old-k8s-version-835962 crio[649]: time="2024-08-12 12:00:25.805536731Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=71ecbc49-7e40-44cb-bd7c-f34e96eabf24 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:00:25 old-k8s-version-835962 crio[649]: time="2024-08-12 12:00:25.805607043Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=71ecbc49-7e40-44cb-bd7c-f34e96eabf24 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:00:25 old-k8s-version-835962 crio[649]: time="2024-08-12 12:00:25.805644044Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=71ecbc49-7e40-44cb-bd7c-f34e96eabf24 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:00:25 old-k8s-version-835962 crio[649]: time="2024-08-12 12:00:25.837265797Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1638bc58-4749-4660-95da-fa7957a0cd5e name=/runtime.v1.RuntimeService/Version
	Aug 12 12:00:25 old-k8s-version-835962 crio[649]: time="2024-08-12 12:00:25.837363193Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1638bc58-4749-4660-95da-fa7957a0cd5e name=/runtime.v1.RuntimeService/Version
	Aug 12 12:00:25 old-k8s-version-835962 crio[649]: time="2024-08-12 12:00:25.839229800Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3f31b424-df97-4667-9dda-f5d8d6a8d0e1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:00:25 old-k8s-version-835962 crio[649]: time="2024-08-12 12:00:25.839892117Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464025839851929,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f31b424-df97-4667-9dda-f5d8d6a8d0e1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:00:25 old-k8s-version-835962 crio[649]: time="2024-08-12 12:00:25.840629987Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fc7e4df5-bd68-45cb-932c-cd0686b6a91a name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:00:25 old-k8s-version-835962 crio[649]: time="2024-08-12 12:00:25.840704082Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fc7e4df5-bd68-45cb-932c-cd0686b6a91a name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:00:25 old-k8s-version-835962 crio[649]: time="2024-08-12 12:00:25.840765639Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fc7e4df5-bd68-45cb-932c-cd0686b6a91a name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:00:25 old-k8s-version-835962 crio[649]: time="2024-08-12 12:00:25.878190102Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7de0e163-21ee-4cb8-b92a-42036e697a00 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:00:25 old-k8s-version-835962 crio[649]: time="2024-08-12 12:00:25.878306105Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7de0e163-21ee-4cb8-b92a-42036e697a00 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:00:25 old-k8s-version-835962 crio[649]: time="2024-08-12 12:00:25.880288096Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8da9b7e5-e990-4325-b3a4-2f7e89a55911 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:00:25 old-k8s-version-835962 crio[649]: time="2024-08-12 12:00:25.880727865Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464025880705491,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8da9b7e5-e990-4325-b3a4-2f7e89a55911 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:00:25 old-k8s-version-835962 crio[649]: time="2024-08-12 12:00:25.881535482Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc57adee-f087-4de8-b3ca-48090eb5ea69 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:00:25 old-k8s-version-835962 crio[649]: time="2024-08-12 12:00:25.881601695Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc57adee-f087-4de8-b3ca-48090eb5ea69 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:00:25 old-k8s-version-835962 crio[649]: time="2024-08-12 12:00:25.881635920Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cc57adee-f087-4de8-b3ca-48090eb5ea69 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:00:25 old-k8s-version-835962 crio[649]: time="2024-08-12 12:00:25.912923320Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8a48c399-7cf7-4ed1-b5b7-99a810d2695b name=/runtime.v1.RuntimeService/Version
	Aug 12 12:00:25 old-k8s-version-835962 crio[649]: time="2024-08-12 12:00:25.913031578Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8a48c399-7cf7-4ed1-b5b7-99a810d2695b name=/runtime.v1.RuntimeService/Version
	Aug 12 12:00:25 old-k8s-version-835962 crio[649]: time="2024-08-12 12:00:25.914342738Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=48c9c8f2-d4b7-4ef8-94cb-a6ea2be74f16 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:00:25 old-k8s-version-835962 crio[649]: time="2024-08-12 12:00:25.914784957Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464025914756712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=48c9c8f2-d4b7-4ef8-94cb-a6ea2be74f16 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:00:25 old-k8s-version-835962 crio[649]: time="2024-08-12 12:00:25.915286381Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b8b0f7d-66be-4d21-b44a-9e23e5ad7630 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:00:25 old-k8s-version-835962 crio[649]: time="2024-08-12 12:00:25.915359729Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b8b0f7d-66be-4d21-b44a-9e23e5ad7630 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:00:25 old-k8s-version-835962 crio[649]: time="2024-08-12 12:00:25.915395742Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9b8b0f7d-66be-4d21-b44a-9e23e5ad7630 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug12 11:43] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051227] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037827] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.743835] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.017925] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.558019] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.216104] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.055590] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052853] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.197707] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.118940] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.224588] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +6.260019] systemd-fstab-generator[897]: Ignoring "noauto" option for root device
	[  +0.065050] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.865114] systemd-fstab-generator[1022]: Ignoring "noauto" option for root device
	[ +14.292569] kauditd_printk_skb: 46 callbacks suppressed
	[Aug12 11:47] systemd-fstab-generator[5053]: Ignoring "noauto" option for root device
	[Aug12 11:49] systemd-fstab-generator[5340]: Ignoring "noauto" option for root device
	[  +0.063898] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:00:26 up 17 min,  0 users,  load average: 0.05, 0.03, 0.02
	Linux old-k8s-version-835962 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 12 12:00:22 old-k8s-version-835962 kubelet[6508]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc00024e1c0, 0xc000d74c30, 0x1, 0x0, 0x0)
	Aug 12 12:00:22 old-k8s-version-835962 kubelet[6508]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	Aug 12 12:00:22 old-k8s-version-835962 kubelet[6508]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc000575340)
	Aug 12 12:00:22 old-k8s-version-835962 kubelet[6508]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1265 +0x179
	Aug 12 12:00:22 old-k8s-version-835962 kubelet[6508]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Aug 12 12:00:22 old-k8s-version-835962 kubelet[6508]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Aug 12 12:00:22 old-k8s-version-835962 kubelet[6508]: goroutine 81 [select]:
	Aug 12 12:00:22 old-k8s-version-835962 kubelet[6508]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000cb1720, 0x1, 0x0, 0x0, 0x0, 0x0)
	Aug 12 12:00:22 old-k8s-version-835962 kubelet[6508]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Aug 12 12:00:22 old-k8s-version-835962 kubelet[6508]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000188780, 0x0, 0x0)
	Aug 12 12:00:22 old-k8s-version-835962 kubelet[6508]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Aug 12 12:00:22 old-k8s-version-835962 kubelet[6508]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000575340)
	Aug 12 12:00:22 old-k8s-version-835962 kubelet[6508]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Aug 12 12:00:22 old-k8s-version-835962 kubelet[6508]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Aug 12 12:00:22 old-k8s-version-835962 kubelet[6508]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Aug 12 12:00:22 old-k8s-version-835962 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 12 12:00:22 old-k8s-version-835962 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 12 12:00:23 old-k8s-version-835962 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Aug 12 12:00:23 old-k8s-version-835962 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 12 12:00:23 old-k8s-version-835962 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 12 12:00:23 old-k8s-version-835962 kubelet[6517]: I0812 12:00:23.629034    6517 server.go:416] Version: v1.20.0
	Aug 12 12:00:23 old-k8s-version-835962 kubelet[6517]: I0812 12:00:23.629280    6517 server.go:837] Client rotation is on, will bootstrap in background
	Aug 12 12:00:23 old-k8s-version-835962 kubelet[6517]: I0812 12:00:23.631185    6517 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 12 12:00:23 old-k8s-version-835962 kubelet[6517]: W0812 12:00:23.632022    6517 manager.go:159] Cannot detect current cgroup on cgroup v2
	Aug 12 12:00:23 old-k8s-version-835962 kubelet[6517]: I0812 12:00:23.632527    6517 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-835962 -n old-k8s-version-835962
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-835962 -n old-k8s-version-835962: exit status 2 (228.293781ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-835962" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.86s)
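The kubelet log above shows the service crash-looping (systemd restart counter at 114) while the API server on localhost:8443 stays unreachable, which is why the kubectl-based checks were skipped. As a rough sketch of how that state could be confirmed by hand on the profile's node (using the same binary, profile name, and ssh pattern that already appear in this report; not part of the test itself):

	out/minikube-linux-amd64 -p old-k8s-version-835962 ssh "sudo systemctl status kubelet --no-pager"
	out/minikube-linux-amd64 -p old-k8s-version-835962 ssh "sudo journalctl -u kubelet -n 50 --no-pager"

Both commands mirror the log-gathering steps recorded earlier (journalctl -u kubelet, systemctl checks) and only read state; they do not change the cluster.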

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-581883 -n default-k8s-diff-port-581883
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-12 12:06:21.587508825 +0000 UTC m=+6370.744431404
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
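The wait performed in start_stop_delete_test.go is essentially a poll for Ready pods matching the k8s-app=kubernetes-dashboard label. A command-line approximation of the same check (a sketch only; it relies on the fact, shown earlier in this log, that kubectl is configured with a context named after the profile, and 540s corresponds to the 9m0s window):

	kubectl --context default-k8s-diff-port-581883 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context default-k8s-diff-port-581883 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=540s

Here no matching pods became Ready within the window, so the wait ends with the context-deadline error reported above.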
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-581883 -n default-k8s-diff-port-581883
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-581883 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-581883 logs -n 25: (1.79523155s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC | 12 Aug 24 11:44 UTC |
	|         | default-k8s-diff-port-581883                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-993542                  | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-993542                                   | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC | 12 Aug 24 11:49 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-581883  | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:44 UTC | 12 Aug 24 11:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:44 UTC |                     |
	|         | default-k8s-diff-port-581883                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-581883       | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:46 UTC | 12 Aug 24 11:57 UTC |
	|         | default-k8s-diff-port-581883                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-835962                              | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 12:02 UTC | 12 Aug 24 12:02 UTC |
	| start   | -p newest-cni-567702 --memory=2200 --alsologtostderr   | newest-cni-567702            | jenkins | v1.33.1 | 12 Aug 24 12:02 UTC | 12 Aug 24 12:03 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-567702             | newest-cni-567702            | jenkins | v1.33.1 | 12 Aug 24 12:03 UTC | 12 Aug 24 12:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-567702                                   | newest-cni-567702            | jenkins | v1.33.1 | 12 Aug 24 12:03 UTC | 12 Aug 24 12:03 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-567702                  | newest-cni-567702            | jenkins | v1.33.1 | 12 Aug 24 12:03 UTC | 12 Aug 24 12:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-567702 --memory=2200 --alsologtostderr   | newest-cni-567702            | jenkins | v1.33.1 | 12 Aug 24 12:03 UTC | 12 Aug 24 12:04 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| image   | newest-cni-567702 image list                           | newest-cni-567702            | jenkins | v1.33.1 | 12 Aug 24 12:04 UTC | 12 Aug 24 12:04 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-567702                                   | newest-cni-567702            | jenkins | v1.33.1 | 12 Aug 24 12:04 UTC | 12 Aug 24 12:04 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-567702                                   | newest-cni-567702            | jenkins | v1.33.1 | 12 Aug 24 12:04 UTC | 12 Aug 24 12:04 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-567702                                   | newest-cni-567702            | jenkins | v1.33.1 | 12 Aug 24 12:04 UTC | 12 Aug 24 12:04 UTC |
	| delete  | -p newest-cni-567702                                   | newest-cni-567702            | jenkins | v1.33.1 | 12 Aug 24 12:04 UTC | 12 Aug 24 12:04 UTC |
	| start   | -p auto-824402 --memory=3072                           | auto-824402                  | jenkins | v1.33.1 | 12 Aug 24 12:04 UTC | 12 Aug 24 12:06 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p embed-certs-093615                                  | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 12:04 UTC | 12 Aug 24 12:04 UTC |
	| start   | -p kindnet-824402                                      | kindnet-824402               | jenkins | v1.33.1 | 12 Aug 24 12:04 UTC | 12 Aug 24 12:06 UTC |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p no-preload-993542                                   | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 12:04 UTC | 12 Aug 24 12:04 UTC |
	| start   | -p calico-824402 --memory=3072                         | calico-824402                | jenkins | v1.33.1 | 12 Aug 24 12:04 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                             |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| ssh     | -p auto-824402 pgrep -a                                | auto-824402                  | jenkins | v1.33.1 | 12 Aug 24 12:06 UTC | 12 Aug 24 12:06 UTC |
	|         | kubelet                                                |                              |         |         |                     |                     |
	| ssh     | -p kindnet-824402 pgrep -a                             | kindnet-824402               | jenkins | v1.33.1 | 12 Aug 24 12:06 UTC | 12 Aug 24 12:06 UTC |
	|         | kubelet                                                |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 12:04:59
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 12:04:59.296806   66240 out.go:291] Setting OutFile to fd 1 ...
	I0812 12:04:59.296936   66240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:04:59.296942   66240 out.go:304] Setting ErrFile to fd 2...
	I0812 12:04:59.296947   66240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:04:59.297149   66240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 12:04:59.297747   66240 out.go:298] Setting JSON to false
	I0812 12:04:59.298674   66240 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6440,"bootTime":1723457859,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 12:04:59.298741   66240 start.go:139] virtualization: kvm guest
	I0812 12:04:59.301247   66240 out.go:177] * [calico-824402] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 12:04:59.302879   66240 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 12:04:59.302922   66240 notify.go:220] Checking for updates...
	I0812 12:04:59.306149   66240 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 12:04:59.307806   66240 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 12:04:59.309108   66240 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 12:04:59.310350   66240 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 12:04:59.311643   66240 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 12:04:59.313732   66240 config.go:182] Loaded profile config "auto-824402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:04:59.313927   66240 config.go:182] Loaded profile config "default-k8s-diff-port-581883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:04:59.314060   66240 config.go:182] Loaded profile config "kindnet-824402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:04:59.314201   66240 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 12:04:59.353856   66240 out.go:177] * Using the kvm2 driver based on user configuration
	I0812 12:04:59.355431   66240 start.go:297] selected driver: kvm2
	I0812 12:04:59.355449   66240 start.go:901] validating driver "kvm2" against <nil>
	I0812 12:04:59.355467   66240 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 12:04:59.356181   66240 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 12:04:59.356273   66240 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19409-3774/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 12:04:59.377819   66240 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 12:04:59.377885   66240 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 12:04:59.378125   66240 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 12:04:59.378155   66240 cni.go:84] Creating CNI manager for "calico"
	I0812 12:04:59.378163   66240 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0812 12:04:59.378246   66240 start.go:340] cluster config:
	{Name:calico-824402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-824402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 12:04:59.378354   66240 iso.go:125] acquiring lock: {Name:mk817f4bb6a5031e68978029203802957205757f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 12:04:59.380603   66240 out.go:177] * Starting "calico-824402" primary control-plane node in "calico-824402" cluster
	I0812 12:04:55.251297   65845 main.go:141] libmachine: (kindnet-824402) DBG | I0812 12:04:55.251166   65928 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/kindnet-824402/id_rsa...
	I0812 12:04:55.425200   65845 main.go:141] libmachine: (kindnet-824402) DBG | I0812 12:04:55.425065   65928 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/kindnet-824402/kindnet-824402.rawdisk...
	I0812 12:04:55.425232   65845 main.go:141] libmachine: (kindnet-824402) DBG | Writing magic tar header
	I0812 12:04:55.425247   65845 main.go:141] libmachine: (kindnet-824402) DBG | Writing SSH key tar header
	I0812 12:04:55.425256   65845 main.go:141] libmachine: (kindnet-824402) DBG | I0812 12:04:55.425201   65928 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19409-3774/.minikube/machines/kindnet-824402 ...
	I0812 12:04:55.425422   65845 main.go:141] libmachine: (kindnet-824402) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/kindnet-824402
	I0812 12:04:55.425451   65845 main.go:141] libmachine: (kindnet-824402) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube/machines
	I0812 12:04:55.425467   65845 main.go:141] libmachine: (kindnet-824402) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube/machines/kindnet-824402 (perms=drwx------)
	I0812 12:04:55.425487   65845 main.go:141] libmachine: (kindnet-824402) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube/machines (perms=drwxr-xr-x)
	I0812 12:04:55.425498   65845 main.go:141] libmachine: (kindnet-824402) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube (perms=drwxr-xr-x)
	I0812 12:04:55.425512   65845 main.go:141] libmachine: (kindnet-824402) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774 (perms=drwxrwxr-x)
	I0812 12:04:55.425522   65845 main.go:141] libmachine: (kindnet-824402) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0812 12:04:55.425536   65845 main.go:141] libmachine: (kindnet-824402) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 12:04:55.425548   65845 main.go:141] libmachine: (kindnet-824402) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0812 12:04:55.425564   65845 main.go:141] libmachine: (kindnet-824402) Creating domain...
	I0812 12:04:55.425587   65845 main.go:141] libmachine: (kindnet-824402) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774
	I0812 12:04:55.425602   65845 main.go:141] libmachine: (kindnet-824402) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0812 12:04:55.425619   65845 main.go:141] libmachine: (kindnet-824402) DBG | Checking permissions on dir: /home/jenkins
	I0812 12:04:55.425634   65845 main.go:141] libmachine: (kindnet-824402) DBG | Checking permissions on dir: /home
	I0812 12:04:55.425644   65845 main.go:141] libmachine: (kindnet-824402) DBG | Skipping /home - not owner
	I0812 12:04:55.426758   65845 main.go:141] libmachine: (kindnet-824402) define libvirt domain using xml: 
	I0812 12:04:55.426784   65845 main.go:141] libmachine: (kindnet-824402) <domain type='kvm'>
	I0812 12:04:55.426796   65845 main.go:141] libmachine: (kindnet-824402)   <name>kindnet-824402</name>
	I0812 12:04:55.426806   65845 main.go:141] libmachine: (kindnet-824402)   <memory unit='MiB'>3072</memory>
	I0812 12:04:55.426814   65845 main.go:141] libmachine: (kindnet-824402)   <vcpu>2</vcpu>
	I0812 12:04:55.426821   65845 main.go:141] libmachine: (kindnet-824402)   <features>
	I0812 12:04:55.426840   65845 main.go:141] libmachine: (kindnet-824402)     <acpi/>
	I0812 12:04:55.426850   65845 main.go:141] libmachine: (kindnet-824402)     <apic/>
	I0812 12:04:55.426860   65845 main.go:141] libmachine: (kindnet-824402)     <pae/>
	I0812 12:04:55.426870   65845 main.go:141] libmachine: (kindnet-824402)     
	I0812 12:04:55.426881   65845 main.go:141] libmachine: (kindnet-824402)   </features>
	I0812 12:04:55.426902   65845 main.go:141] libmachine: (kindnet-824402)   <cpu mode='host-passthrough'>
	I0812 12:04:55.426914   65845 main.go:141] libmachine: (kindnet-824402)   
	I0812 12:04:55.426924   65845 main.go:141] libmachine: (kindnet-824402)   </cpu>
	I0812 12:04:55.426933   65845 main.go:141] libmachine: (kindnet-824402)   <os>
	I0812 12:04:55.426944   65845 main.go:141] libmachine: (kindnet-824402)     <type>hvm</type>
	I0812 12:04:55.426955   65845 main.go:141] libmachine: (kindnet-824402)     <boot dev='cdrom'/>
	I0812 12:04:55.426966   65845 main.go:141] libmachine: (kindnet-824402)     <boot dev='hd'/>
	I0812 12:04:55.426976   65845 main.go:141] libmachine: (kindnet-824402)     <bootmenu enable='no'/>
	I0812 12:04:55.426987   65845 main.go:141] libmachine: (kindnet-824402)   </os>
	I0812 12:04:55.426999   65845 main.go:141] libmachine: (kindnet-824402)   <devices>
	I0812 12:04:55.427011   65845 main.go:141] libmachine: (kindnet-824402)     <disk type='file' device='cdrom'>
	I0812 12:04:55.427029   65845 main.go:141] libmachine: (kindnet-824402)       <source file='/home/jenkins/minikube-integration/19409-3774/.minikube/machines/kindnet-824402/boot2docker.iso'/>
	I0812 12:04:55.427044   65845 main.go:141] libmachine: (kindnet-824402)       <target dev='hdc' bus='scsi'/>
	I0812 12:04:55.427056   65845 main.go:141] libmachine: (kindnet-824402)       <readonly/>
	I0812 12:04:55.427066   65845 main.go:141] libmachine: (kindnet-824402)     </disk>
	I0812 12:04:55.427077   65845 main.go:141] libmachine: (kindnet-824402)     <disk type='file' device='disk'>
	I0812 12:04:55.427090   65845 main.go:141] libmachine: (kindnet-824402)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0812 12:04:55.427106   65845 main.go:141] libmachine: (kindnet-824402)       <source file='/home/jenkins/minikube-integration/19409-3774/.minikube/machines/kindnet-824402/kindnet-824402.rawdisk'/>
	I0812 12:04:55.427121   65845 main.go:141] libmachine: (kindnet-824402)       <target dev='hda' bus='virtio'/>
	I0812 12:04:55.427132   65845 main.go:141] libmachine: (kindnet-824402)     </disk>
	I0812 12:04:55.427141   65845 main.go:141] libmachine: (kindnet-824402)     <interface type='network'>
	I0812 12:04:55.427155   65845 main.go:141] libmachine: (kindnet-824402)       <source network='mk-kindnet-824402'/>
	I0812 12:04:55.427167   65845 main.go:141] libmachine: (kindnet-824402)       <model type='virtio'/>
	I0812 12:04:55.427179   65845 main.go:141] libmachine: (kindnet-824402)     </interface>
	I0812 12:04:55.427190   65845 main.go:141] libmachine: (kindnet-824402)     <interface type='network'>
	I0812 12:04:55.427203   65845 main.go:141] libmachine: (kindnet-824402)       <source network='default'/>
	I0812 12:04:55.427218   65845 main.go:141] libmachine: (kindnet-824402)       <model type='virtio'/>
	I0812 12:04:55.427229   65845 main.go:141] libmachine: (kindnet-824402)     </interface>
	I0812 12:04:55.427239   65845 main.go:141] libmachine: (kindnet-824402)     <serial type='pty'>
	I0812 12:04:55.427249   65845 main.go:141] libmachine: (kindnet-824402)       <target port='0'/>
	I0812 12:04:55.427259   65845 main.go:141] libmachine: (kindnet-824402)     </serial>
	I0812 12:04:55.427271   65845 main.go:141] libmachine: (kindnet-824402)     <console type='pty'>
	I0812 12:04:55.427282   65845 main.go:141] libmachine: (kindnet-824402)       <target type='serial' port='0'/>
	I0812 12:04:55.427294   65845 main.go:141] libmachine: (kindnet-824402)     </console>
	I0812 12:04:55.427306   65845 main.go:141] libmachine: (kindnet-824402)     <rng model='virtio'>
	I0812 12:04:55.427315   65845 main.go:141] libmachine: (kindnet-824402)       <backend model='random'>/dev/random</backend>
	I0812 12:04:55.427333   65845 main.go:141] libmachine: (kindnet-824402)     </rng>
	I0812 12:04:55.427343   65845 main.go:141] libmachine: (kindnet-824402)     
	I0812 12:04:55.427352   65845 main.go:141] libmachine: (kindnet-824402)     
	I0812 12:04:55.427362   65845 main.go:141] libmachine: (kindnet-824402)   </devices>
	I0812 12:04:55.427377   65845 main.go:141] libmachine: (kindnet-824402) </domain>
	I0812 12:04:55.427392   65845 main.go:141] libmachine: (kindnet-824402) 
	I0812 12:04:55.432391   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:ad:d2:c7 in network default
	I0812 12:04:55.433067   65845 main.go:141] libmachine: (kindnet-824402) Ensuring networks are active...
	I0812 12:04:55.433095   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:04:55.433898   65845 main.go:141] libmachine: (kindnet-824402) Ensuring network default is active
	I0812 12:04:55.434213   65845 main.go:141] libmachine: (kindnet-824402) Ensuring network mk-kindnet-824402 is active
	I0812 12:04:55.434705   65845 main.go:141] libmachine: (kindnet-824402) Getting domain xml...
	I0812 12:04:55.435480   65845 main.go:141] libmachine: (kindnet-824402) Creating domain...
	I0812 12:04:56.970810   65845 main.go:141] libmachine: (kindnet-824402) Waiting to get IP...
	I0812 12:04:56.971706   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:04:56.972236   65845 main.go:141] libmachine: (kindnet-824402) DBG | unable to find current IP address of domain kindnet-824402 in network mk-kindnet-824402
	I0812 12:04:56.972273   65845 main.go:141] libmachine: (kindnet-824402) DBG | I0812 12:04:56.972228   65928 retry.go:31] will retry after 239.328276ms: waiting for machine to come up
	I0812 12:04:57.213696   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:04:57.214256   65845 main.go:141] libmachine: (kindnet-824402) DBG | unable to find current IP address of domain kindnet-824402 in network mk-kindnet-824402
	I0812 12:04:57.214278   65845 main.go:141] libmachine: (kindnet-824402) DBG | I0812 12:04:57.214207   65928 retry.go:31] will retry after 254.07428ms: waiting for machine to come up
	I0812 12:04:57.469796   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:04:57.470451   65845 main.go:141] libmachine: (kindnet-824402) DBG | unable to find current IP address of domain kindnet-824402 in network mk-kindnet-824402
	I0812 12:04:57.470475   65845 main.go:141] libmachine: (kindnet-824402) DBG | I0812 12:04:57.470407   65928 retry.go:31] will retry after 336.899595ms: waiting for machine to come up
	I0812 12:04:57.808819   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:04:57.809650   65845 main.go:141] libmachine: (kindnet-824402) DBG | unable to find current IP address of domain kindnet-824402 in network mk-kindnet-824402
	I0812 12:04:57.809682   65845 main.go:141] libmachine: (kindnet-824402) DBG | I0812 12:04:57.809566   65928 retry.go:31] will retry after 413.553053ms: waiting for machine to come up
	I0812 12:04:58.225139   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:04:58.225808   65845 main.go:141] libmachine: (kindnet-824402) DBG | unable to find current IP address of domain kindnet-824402 in network mk-kindnet-824402
	I0812 12:04:58.225835   65845 main.go:141] libmachine: (kindnet-824402) DBG | I0812 12:04:58.225760   65928 retry.go:31] will retry after 711.347449ms: waiting for machine to come up
	I0812 12:04:58.966459   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:04:58.967065   65845 main.go:141] libmachine: (kindnet-824402) DBG | unable to find current IP address of domain kindnet-824402 in network mk-kindnet-824402
	I0812 12:04:58.967106   65845 main.go:141] libmachine: (kindnet-824402) DBG | I0812 12:04:58.967014   65928 retry.go:31] will retry after 931.062807ms: waiting for machine to come up
	I0812 12:04:59.899452   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:04:59.900201   65845 main.go:141] libmachine: (kindnet-824402) DBG | unable to find current IP address of domain kindnet-824402 in network mk-kindnet-824402
	I0812 12:04:59.900232   65845 main.go:141] libmachine: (kindnet-824402) DBG | I0812 12:04:59.900146   65928 retry.go:31] will retry after 826.751905ms: waiting for machine to come up
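
The block above is libmachine polling the libvirt DHCP leases for the new domain's MAC address and retrying with a growing, jittered delay until the machine reports an IP. A minimal sketch of that wait loop, assuming a hypothetical lookupIP helper in place of the real libvirt query; this is an illustration, not minikube's retry.go:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for "ask libvirt for the DHCP lease of
// this MAC"; it always fails here so the retry loop below can be exercised.
func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP retries lookupIP with a jittered, growing delay, mirroring the
// "will retry after ...: waiting for machine to come up" lines in the log.
func waitForIP(mac string, maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay += delay / 2 // grow roughly 1.5x per attempt
	}
	return "", fmt.Errorf("machine %s did not get an IP within %v", mac, maxWait)
}

func main() {
	if _, err := waitForIP("52:54:00:3a:eb:02", 3*time.Second); err != nil {
		fmt.Println(err)
	}
}
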
	I0812 12:04:56.458428   65466 main.go:141] libmachine: (auto-824402) Calling .GetIP
	I0812 12:04:56.461824   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:56.462400   65466 main.go:141] libmachine: (auto-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:95:4f", ip: ""} in network mk-auto-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:04:44 +0000 UTC Type:0 Mac:52:54:00:a8:95:4f Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:auto-824402 Clientid:01:52:54:00:a8:95:4f}
	I0812 12:04:56.462422   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined IP address 192.168.39.142 and MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:56.462800   65466 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0812 12:04:56.468141   65466 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 12:04:56.484154   65466 kubeadm.go:883] updating cluster {Name:auto-824402 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:auto-824402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0812 12:04:56.484290   65466 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 12:04:56.484353   65466 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 12:04:56.524726   65466 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0812 12:04:56.524797   65466 ssh_runner.go:195] Run: which lz4
	I0812 12:04:56.529009   65466 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0812 12:04:56.533205   65466 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0812 12:04:56.533245   65466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0812 12:04:58.020538   65466 crio.go:462] duration metric: took 1.491571786s to copy over tarball
	I0812 12:04:58.020633   65466 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0812 12:05:00.624324   65466 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.603656269s)
	I0812 12:05:00.624368   65466 crio.go:469] duration metric: took 2.603792141s to extract the tarball
	I0812 12:05:00.624379   65466 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0812 12:05:00.667595   65466 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 12:05:00.708415   65466 crio.go:514] all images are preloaded for cri-o runtime.
	I0812 12:05:00.708440   65466 cache_images.go:84] Images are preloaded, skipping loading
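
The sequence above first asks crictl for the images already present, and only when the expected kube-apiserver image is missing copies the preloaded tarball to /preloaded.tar.lz4 and unpacks it into /var with "tar -I lz4". A sketch of the same check-then-extract flow run locally via os/exec; the paths and image name come from the log, while the helper itself is illustrative and not minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// preloadIfMissing mirrors the log: list images via crictl, and if the
// expected image is absent, extract the preloaded tarball into /var.
// Assumes crictl, sudo and lz4 are available on the host.
func preloadIfMissing(tarball, wantImage string) error {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return fmt.Errorf("crictl images: %w", err)
	}
	if strings.Contains(string(out), wantImage) {
		fmt.Println("all images are preloaded, skipping")
		return nil
	}
	// -I lz4 decompresses through lz4, as in the tar invocation in the log.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("extract %s: %w", tarball, err)
	}
	return nil
}

func main() {
	if err := preloadIfMissing("/preloaded.tar.lz4", "registry.k8s.io/kube-apiserver:v1.30.3"); err != nil {
		fmt.Println(err)
	}
}
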
	I0812 12:05:00.708450   65466 kubeadm.go:934] updating node { 192.168.39.142 8443 v1.30.3 crio true true} ...
	I0812 12:05:00.708607   65466 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-824402 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:auto-824402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0812 12:05:00.708699   65466 ssh_runner.go:195] Run: crio config
	I0812 12:05:00.758727   65466 cni.go:84] Creating CNI manager for ""
	I0812 12:05:00.758747   65466 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 12:05:00.758757   65466 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0812 12:05:00.758785   65466 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.142 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-824402 NodeName:auto-824402 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.142"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.142 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0812 12:05:00.758941   65466 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.142
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-824402"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.142
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.142"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0812 12:05:00.759011   65466 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0812 12:05:00.768944   65466 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 12:05:00.769004   65466 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0812 12:05:00.778614   65466 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0812 12:05:00.795104   65466 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 12:05:00.812358   65466 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
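
The kubeadm.yaml written above is a four-document YAML: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A small stdlib-only sketch that splits such a file on document separators and prints each kind, handy when eyeballing the generated config; the file path comes from the log, and the code is a quick inspection aid rather than a validator:

package main

import (
	"fmt"
	"os"
	"strings"
)

// listKinds prints the kind of every document in a multi-document YAML file,
// e.g. the kubeadm.yaml generated above. Plain string parsing only, so it
// only reports structure and does not validate field values.
func listKinds(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, "kind:") {
				fmt.Printf("document %d: %s\n", i+1, strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:")))
				break
			}
		}
	}
	return nil
}

func main() {
	if err := listKinds("/var/tmp/minikube/kubeadm.yaml.new"); err != nil {
		fmt.Println(err)
	}
}
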
	I0812 12:05:00.829770   65466 ssh_runner.go:195] Run: grep 192.168.39.142	control-plane.minikube.internal$ /etc/hosts
	I0812 12:05:00.833860   65466 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.142	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 12:05:00.846780   65466 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:05:00.978705   65466 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 12:05:01.001198   65466 certs.go:68] Setting up /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/auto-824402 for IP: 192.168.39.142
	I0812 12:05:01.001221   65466 certs.go:194] generating shared ca certs ...
	I0812 12:05:01.001240   65466 certs.go:226] acquiring lock for ca certs: {Name:mkab0b83a49d5695bc804bf960b468b7a47f73f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:05:01.001416   65466 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key
	I0812 12:05:01.001473   65466 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key
	I0812 12:05:01.001488   65466 certs.go:256] generating profile certs ...
	I0812 12:05:01.001560   65466 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/auto-824402/client.key
	I0812 12:05:01.001597   65466 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/auto-824402/client.crt with IP's: []
	I0812 12:05:01.094388   65466 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/auto-824402/client.crt ...
	I0812 12:05:01.094420   65466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/auto-824402/client.crt: {Name:mk61419010b3bd679dadb9d016038bc42336c8ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:05:01.094592   65466 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/auto-824402/client.key ...
	I0812 12:05:01.094602   65466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/auto-824402/client.key: {Name:mk442f709335d9feb57c62fdeceab8e5f7f88aa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:05:01.094672   65466 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/auto-824402/apiserver.key.1c67a7bf
	I0812 12:05:01.094690   65466 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/auto-824402/apiserver.crt.1c67a7bf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.142]
	I0812 12:05:01.337108   65466 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/auto-824402/apiserver.crt.1c67a7bf ...
	I0812 12:05:01.337138   65466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/auto-824402/apiserver.crt.1c67a7bf: {Name:mk4254a69dee580f1fa5f8d97e62e7f82f710ccb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:05:01.337327   65466 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/auto-824402/apiserver.key.1c67a7bf ...
	I0812 12:05:01.337346   65466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/auto-824402/apiserver.key.1c67a7bf: {Name:mk9fa0ea5a479ab955e0499428a02f051e2fa927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:05:01.337450   65466 certs.go:381] copying /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/auto-824402/apiserver.crt.1c67a7bf -> /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/auto-824402/apiserver.crt
	I0812 12:05:01.337547   65466 certs.go:385] copying /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/auto-824402/apiserver.key.1c67a7bf -> /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/auto-824402/apiserver.key
	I0812 12:05:01.337621   65466 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/auto-824402/proxy-client.key
	I0812 12:05:01.337643   65466 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/auto-824402/proxy-client.crt with IP's: []
	I0812 12:05:01.640638   65466 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/auto-824402/proxy-client.crt ...
	I0812 12:05:01.640669   65466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/auto-824402/proxy-client.crt: {Name:mk28f5cadf586b1256ab1c28f158fede2625206d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:05:01.640847   65466 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/auto-824402/proxy-client.key ...
	I0812 12:05:01.640862   65466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/auto-824402/proxy-client.key: {Name:mka0791a0942b3a34d9f04ac56f203b743c5ba6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:05:01.641111   65466 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem (1338 bytes)
	W0812 12:05:01.641149   65466 certs.go:480] ignoring /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927_empty.pem, impossibly tiny 0 bytes
	I0812 12:05:01.641163   65466 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem (1679 bytes)
	I0812 12:05:01.641199   65466 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem (1082 bytes)
	I0812 12:05:01.641226   65466 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem (1123 bytes)
	I0812 12:05:01.641264   65466 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem (1679 bytes)
	I0812 12:05:01.641326   65466 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem (1708 bytes)
	I0812 12:05:01.641925   65466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 12:05:01.670566   65466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 12:05:01.718862   65466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 12:05:01.755542   65466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 12:05:01.780612   65466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/auto-824402/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0812 12:05:01.811140   65466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/auto-824402/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0812 12:05:01.838516   65466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/auto-824402/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 12:05:01.866151   65466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/auto-824402/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0812 12:05:01.893195   65466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /usr/share/ca-certificates/109272.pem (1708 bytes)
	I0812 12:05:01.920106   65466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 12:05:01.947022   65466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem --> /usr/share/ca-certificates/10927.pem (1338 bytes)
	I0812 12:05:01.973791   65466 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 12:05:01.992138   65466 ssh_runner.go:195] Run: openssl version
	I0812 12:05:01.998160   65466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 12:05:02.009800   65466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:05:02.014763   65466 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:05:02.014842   65466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:05:02.021444   65466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 12:05:02.033454   65466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10927.pem && ln -fs /usr/share/ca-certificates/10927.pem /etc/ssl/certs/10927.pem"
	I0812 12:05:02.045966   65466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10927.pem
	I0812 12:05:02.050752   65466 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 10:33 /usr/share/ca-certificates/10927.pem
	I0812 12:05:02.050816   65466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10927.pem
	I0812 12:05:02.056896   65466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10927.pem /etc/ssl/certs/51391683.0"
	I0812 12:05:02.068159   65466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109272.pem && ln -fs /usr/share/ca-certificates/109272.pem /etc/ssl/certs/109272.pem"
	I0812 12:05:02.079323   65466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109272.pem
	I0812 12:05:02.083840   65466 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 10:33 /usr/share/ca-certificates/109272.pem
	I0812 12:05:02.083923   65466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109272.pem
	I0812 12:05:02.089945   65466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109272.pem /etc/ssl/certs/3ec20f2e.0"
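
The openssl block above computes the subject hash of each CA certificate and symlinks it into /etc/ssl/certs as <hash>.0 so that OpenSSL-based clients scanning that directory will trust it. A sketch of those two steps via os/exec, assuming openssl is on PATH and the caller may write to the certs directory (typically root); illustrative only, not minikube's code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert computes the OpenSSL subject hash of a CA certificate and
// symlinks it as <certsDir>/<hash>.0, matching the openssl/ln -fs sequence
// in the log above.
func installCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, as `ln -fs` would
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
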
	I0812 12:05:02.105165   65466 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 12:05:02.109334   65466 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0812 12:05:02.109388   65466 kubeadm.go:392] StartCluster: {Name:auto-824402 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clu
sterName:auto-824402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 12:05:02.109477   65466 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0812 12:05:02.109528   65466 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 12:05:02.143585   65466 cri.go:89] found id: ""
	I0812 12:05:02.143658   65466 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0812 12:05:02.153443   65466 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 12:05:02.163193   65466 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 12:05:02.172605   65466 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 12:05:02.172623   65466 kubeadm.go:157] found existing configuration files:
	
	I0812 12:05:02.172677   65466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 12:05:02.181798   65466 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 12:05:02.181861   65466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 12:05:02.191297   65466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 12:05:02.200016   65466 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 12:05:02.200083   65466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 12:05:02.209910   65466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 12:05:02.219081   65466 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 12:05:02.219140   65466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 12:05:02.229231   65466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 12:05:02.238273   65466 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 12:05:02.238348   65466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
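
The config check above looks for the four well-known kubeconfig files under /etc/kubernetes and deletes any that exist but do not reference https://control-plane.minikube.internal:8443 before kubeadm init runs. A pure-Go sketch of that cleanup; the file names and endpoint are taken from the log, and the function is illustrative rather than minikube's implementation:

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfigs removes any of the well-known kubeconfig files that
// exist but do not reference the expected control-plane endpoint, matching
// the grep/rm sequence in the log above. Missing files are skipped, which is
// the "No such file or directory" case shown there.
func cleanStaleKubeconfigs(endpoint string) error {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if os.IsNotExist(err) {
			continue
		}
		if err != nil {
			return err
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Printf("removing stale %s\n", f)
			if err := os.Remove(f); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	if err := cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443"); err != nil {
		fmt.Println(err)
	}
}
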
	I0812 12:05:02.247995   65466 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 12:05:02.308192   65466 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0812 12:05:02.308267   65466 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 12:05:02.445210   65466 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 12:05:02.445345   65466 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 12:05:02.445463   65466 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0812 12:05:02.671065   65466 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 12:04:59.382077   66240 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 12:04:59.382127   66240 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0812 12:04:59.382137   66240 cache.go:56] Caching tarball of preloaded images
	I0812 12:04:59.382260   66240 preload.go:172] Found /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 12:04:59.382273   66240 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 12:04:59.382399   66240 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/calico-824402/config.json ...
	I0812 12:04:59.382424   66240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/calico-824402/config.json: {Name:mk9f53fd51a474418b077292054cfd9d418ff0e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:04:59.382611   66240 start.go:360] acquireMachinesLock for calico-824402: {Name:mkf1f3ff7da562a087a1e344d13e67c3c8140973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 12:05:00.728728   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:00.729301   65845 main.go:141] libmachine: (kindnet-824402) DBG | unable to find current IP address of domain kindnet-824402 in network mk-kindnet-824402
	I0812 12:05:00.729326   65845 main.go:141] libmachine: (kindnet-824402) DBG | I0812 12:05:00.729248   65928 retry.go:31] will retry after 1.177214723s: waiting for machine to come up
	I0812 12:05:01.908362   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:01.908884   65845 main.go:141] libmachine: (kindnet-824402) DBG | unable to find current IP address of domain kindnet-824402 in network mk-kindnet-824402
	I0812 12:05:01.908908   65845 main.go:141] libmachine: (kindnet-824402) DBG | I0812 12:05:01.908821   65928 retry.go:31] will retry after 1.524009292s: waiting for machine to come up
	I0812 12:05:03.434123   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:03.434646   65845 main.go:141] libmachine: (kindnet-824402) DBG | unable to find current IP address of domain kindnet-824402 in network mk-kindnet-824402
	I0812 12:05:03.434673   65845 main.go:141] libmachine: (kindnet-824402) DBG | I0812 12:05:03.434598   65928 retry.go:31] will retry after 2.081435072s: waiting for machine to come up
	I0812 12:05:02.790224   65466 out.go:204]   - Generating certificates and keys ...
	I0812 12:05:02.790339   65466 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 12:05:02.790425   65466 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 12:05:02.790562   65466 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0812 12:05:02.992419   65466 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0812 12:05:03.446844   65466 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0812 12:05:03.819489   65466 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0812 12:05:04.335774   65466 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0812 12:05:04.336018   65466 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [auto-824402 localhost] and IPs [192.168.39.142 127.0.0.1 ::1]
	I0812 12:05:04.422073   65466 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0812 12:05:04.422329   65466 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-824402 localhost] and IPs [192.168.39.142 127.0.0.1 ::1]
	I0812 12:05:04.685098   65466 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0812 12:05:04.992478   65466 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0812 12:05:05.486482   65466 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0812 12:05:05.486588   65466 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 12:05:05.872391   65466 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 12:05:06.016145   65466 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0812 12:05:06.373785   65466 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 12:05:06.514397   65466 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 12:05:06.649789   65466 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 12:05:06.650291   65466 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 12:05:06.652729   65466 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 12:05:05.517598   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:05.518148   65845 main.go:141] libmachine: (kindnet-824402) DBG | unable to find current IP address of domain kindnet-824402 in network mk-kindnet-824402
	I0812 12:05:05.518185   65845 main.go:141] libmachine: (kindnet-824402) DBG | I0812 12:05:05.518086   65928 retry.go:31] will retry after 2.638024201s: waiting for machine to come up
	I0812 12:05:08.158584   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:08.159112   65845 main.go:141] libmachine: (kindnet-824402) DBG | unable to find current IP address of domain kindnet-824402 in network mk-kindnet-824402
	I0812 12:05:08.159141   65845 main.go:141] libmachine: (kindnet-824402) DBG | I0812 12:05:08.159056   65928 retry.go:31] will retry after 2.596595082s: waiting for machine to come up
	I0812 12:05:06.654780   65466 out.go:204]   - Booting up control plane ...
	I0812 12:05:06.654930   65466 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 12:05:06.655025   65466 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 12:05:06.655113   65466 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 12:05:06.673767   65466 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 12:05:06.674253   65466 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 12:05:06.674323   65466 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 12:05:06.824011   65466 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0812 12:05:06.824172   65466 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0812 12:05:07.326077   65466 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.434188ms
	I0812 12:05:07.326239   65466 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0812 12:05:12.827884   65466 kubeadm.go:310] [api-check] The API server is healthy after 5.503050239s
	I0812 12:05:12.843386   65466 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0812 12:05:12.856282   65466 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0812 12:05:12.883290   65466 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0812 12:05:12.883558   65466 kubeadm.go:310] [mark-control-plane] Marking the node auto-824402 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0812 12:05:12.894692   65466 kubeadm.go:310] [bootstrap-token] Using token: 1x0y17.5sril0n3ujvlzfsh
	I0812 12:05:12.896162   65466 out.go:204]   - Configuring RBAC rules ...
	I0812 12:05:12.896313   65466 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0812 12:05:12.904556   65466 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0812 12:05:12.923342   65466 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0812 12:05:12.927765   65466 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0812 12:05:12.932003   65466 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0812 12:05:12.936805   65466 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0812 12:05:13.236806   65466 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0812 12:05:13.686879   65466 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0812 12:05:14.233242   65466 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0812 12:05:14.233459   65466 kubeadm.go:310] 
	I0812 12:05:14.233522   65466 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0812 12:05:14.233555   65466 kubeadm.go:310] 
	I0812 12:05:14.233670   65466 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0812 12:05:14.233680   65466 kubeadm.go:310] 
	I0812 12:05:14.233720   65466 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0812 12:05:14.233822   65466 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0812 12:05:14.233902   65466 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0812 12:05:14.233912   65466 kubeadm.go:310] 
	I0812 12:05:14.233979   65466 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0812 12:05:14.233989   65466 kubeadm.go:310] 
	I0812 12:05:14.234053   65466 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0812 12:05:14.234062   65466 kubeadm.go:310] 
	I0812 12:05:14.234126   65466 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0812 12:05:14.234242   65466 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0812 12:05:14.234330   65466 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0812 12:05:14.234344   65466 kubeadm.go:310] 
	I0812 12:05:14.234460   65466 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0812 12:05:14.234612   65466 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0812 12:05:14.234627   65466 kubeadm.go:310] 
	I0812 12:05:14.234701   65466 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1x0y17.5sril0n3ujvlzfsh \
	I0812 12:05:14.234837   65466 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc \
	I0812 12:05:14.234886   65466 kubeadm.go:310] 	--control-plane 
	I0812 12:05:14.234900   65466 kubeadm.go:310] 
	I0812 12:05:14.234977   65466 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0812 12:05:14.234986   65466 kubeadm.go:310] 
	I0812 12:05:14.235067   65466 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1x0y17.5sril0n3ujvlzfsh \
	I0812 12:05:14.235209   65466 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc 
	I0812 12:05:14.235635   65466 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 12:05:14.235667   65466 cni.go:84] Creating CNI manager for ""
	I0812 12:05:14.235682   65466 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 12:05:14.237655   65466 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0812 12:05:10.757415   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:10.757979   65845 main.go:141] libmachine: (kindnet-824402) DBG | unable to find current IP address of domain kindnet-824402 in network mk-kindnet-824402
	I0812 12:05:10.758041   65845 main.go:141] libmachine: (kindnet-824402) DBG | I0812 12:05:10.757952   65928 retry.go:31] will retry after 3.7132263s: waiting for machine to come up
	I0812 12:05:14.474910   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:14.475429   65845 main.go:141] libmachine: (kindnet-824402) DBG | unable to find current IP address of domain kindnet-824402 in network mk-kindnet-824402
	I0812 12:05:14.475461   65845 main.go:141] libmachine: (kindnet-824402) DBG | I0812 12:05:14.475383   65928 retry.go:31] will retry after 3.427708845s: waiting for machine to come up
	I0812 12:05:14.239213   65466 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0812 12:05:14.250193   65466 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
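
The 496-byte /etc/cni/net.d/1-k8s.conflist written above is not reproduced in the log, so the sketch below writes a generic bridge-plus-portmap conflist using the pod CIDR that does appear in the log (10.244.0.0/16). Treat the JSON as an illustrative stand-in for whatever minikube actually generates, not as its exact contents:

package main

import (
	"fmt"
	"os"
)

// A generic bridge CNI configuration with the pod CIDR from the log. The
// real conflist produced by minikube may differ; this is an assumed example.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println(err)
	}
}
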
	I0812 12:05:14.271916   65466 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 12:05:14.271998   65466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:14.272017   65466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-824402 minikube.k8s.io/updated_at=2024_08_12T12_05_14_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7 minikube.k8s.io/name=auto-824402 minikube.k8s.io/primary=true
	I0812 12:05:14.305171   65466 ops.go:34] apiserver oom_adj: -16
	I0812 12:05:14.422108   65466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:14.923162   65466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:19.489887   66240 start.go:364] duration metric: took 20.107249863s to acquireMachinesLock for "calico-824402"
	I0812 12:05:19.489952   66240 start.go:93] Provisioning new machine with config: &{Name:calico-824402 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:calico-824402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 12:05:19.490025   66240 start.go:125] createHost starting for "" (driver="kvm2")
	I0812 12:05:17.906062   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:17.906521   65845 main.go:141] libmachine: (kindnet-824402) Found IP for machine: 192.168.72.181
	I0812 12:05:17.906561   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has current primary IP address 192.168.72.181 and MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:17.906572   65845 main.go:141] libmachine: (kindnet-824402) Reserving static IP address...
	I0812 12:05:17.906953   65845 main.go:141] libmachine: (kindnet-824402) DBG | unable to find host DHCP lease matching {name: "kindnet-824402", mac: "52:54:00:3a:eb:02", ip: "192.168.72.181"} in network mk-kindnet-824402
	I0812 12:05:17.995460   65845 main.go:141] libmachine: (kindnet-824402) DBG | Getting to WaitForSSH function...
	I0812 12:05:17.995492   65845 main.go:141] libmachine: (kindnet-824402) Reserved static IP address: 192.168.72.181
	I0812 12:05:17.995505   65845 main.go:141] libmachine: (kindnet-824402) Waiting for SSH to be available...
	I0812 12:05:17.998656   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:17.998927   65845 main.go:141] libmachine: (kindnet-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:eb:02", ip: ""} in network mk-kindnet-824402: {Iface:virbr3 ExpiryTime:2024-08-12 13:05:09 +0000 UTC Type:0 Mac:52:54:00:3a:eb:02 Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3a:eb:02}
	I0812 12:05:17.998956   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined IP address 192.168.72.181 and MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:17.999114   65845 main.go:141] libmachine: (kindnet-824402) DBG | Using SSH client type: external
	I0812 12:05:17.999144   65845 main.go:141] libmachine: (kindnet-824402) DBG | Using SSH private key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/kindnet-824402/id_rsa (-rw-------)
	I0812 12:05:17.999182   65845 main.go:141] libmachine: (kindnet-824402) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.181 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19409-3774/.minikube/machines/kindnet-824402/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0812 12:05:17.999212   65845 main.go:141] libmachine: (kindnet-824402) DBG | About to run SSH command:
	I0812 12:05:17.999228   65845 main.go:141] libmachine: (kindnet-824402) DBG | exit 0
	I0812 12:05:18.125036   65845 main.go:141] libmachine: (kindnet-824402) DBG | SSH cmd err, output: <nil>: 
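
WaitForSSH above repeatedly invokes an external ssh client with the command "exit 0" until the connection succeeds. A sketch of the same probe with os/exec, reusing options visible in the log; the address and key path are the ones from this run, and the function is illustrative rather than libmachine's implementation:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH runs `ssh ... exit 0` against the new machine until it succeeds
// or the deadline passes, mirroring the WaitForSSH probe in the log above.
func waitForSSH(addr, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-o", "PasswordAuthentication=no",
			"-i", keyPath,
			"docker@"+addr,
			"exit 0")
		if err := cmd.Run(); err == nil {
			return nil // SSH is up
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s not available after %v", addr, timeout)
}

func main() {
	key := "/home/jenkins/minikube-integration/19409-3774/.minikube/machines/kindnet-824402/id_rsa"
	if err := waitForSSH("192.168.72.181", key, 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
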
	I0812 12:05:18.125351   65845 main.go:141] libmachine: (kindnet-824402) KVM machine creation complete!
	I0812 12:05:18.125646   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetConfigRaw
	I0812 12:05:18.126228   65845 main.go:141] libmachine: (kindnet-824402) Calling .DriverName
	I0812 12:05:18.126499   65845 main.go:141] libmachine: (kindnet-824402) Calling .DriverName
	I0812 12:05:18.126690   65845 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0812 12:05:18.126707   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetState
	I0812 12:05:18.128032   65845 main.go:141] libmachine: Detecting operating system of created instance...
	I0812 12:05:18.128046   65845 main.go:141] libmachine: Waiting for SSH to be available...
	I0812 12:05:18.128052   65845 main.go:141] libmachine: Getting to WaitForSSH function...
	I0812 12:05:18.128057   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHHostname
	I0812 12:05:18.130945   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:18.131588   65845 main.go:141] libmachine: (kindnet-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:eb:02", ip: ""} in network mk-kindnet-824402: {Iface:virbr3 ExpiryTime:2024-08-12 13:05:09 +0000 UTC Type:0 Mac:52:54:00:3a:eb:02 Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:kindnet-824402 Clientid:01:52:54:00:3a:eb:02}
	I0812 12:05:18.131615   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined IP address 192.168.72.181 and MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:18.131880   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHPort
	I0812 12:05:18.132084   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHKeyPath
	I0812 12:05:18.132250   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHKeyPath
	I0812 12:05:18.132479   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHUsername
	I0812 12:05:18.132662   65845 main.go:141] libmachine: Using SSH client type: native
	I0812 12:05:18.132906   65845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0812 12:05:18.132919   65845 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0812 12:05:18.236611   65845 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 12:05:18.236632   65845 main.go:141] libmachine: Detecting the provisioner...
	I0812 12:05:18.236639   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHHostname
	I0812 12:05:18.239692   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:18.240100   65845 main.go:141] libmachine: (kindnet-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:eb:02", ip: ""} in network mk-kindnet-824402: {Iface:virbr3 ExpiryTime:2024-08-12 13:05:09 +0000 UTC Type:0 Mac:52:54:00:3a:eb:02 Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:kindnet-824402 Clientid:01:52:54:00:3a:eb:02}
	I0812 12:05:18.240128   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined IP address 192.168.72.181 and MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:18.240376   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHPort
	I0812 12:05:18.240695   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHKeyPath
	I0812 12:05:18.240892   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHKeyPath
	I0812 12:05:18.241106   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHUsername
	I0812 12:05:18.241298   65845 main.go:141] libmachine: Using SSH client type: native
	I0812 12:05:18.241469   65845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0812 12:05:18.241481   65845 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0812 12:05:18.345521   65845 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0812 12:05:18.345611   65845 main.go:141] libmachine: found compatible host: buildroot
	I0812 12:05:18.345630   65845 main.go:141] libmachine: Provisioning with buildroot...
	I0812 12:05:18.345645   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetMachineName
	I0812 12:05:18.345958   65845 buildroot.go:166] provisioning hostname "kindnet-824402"
	I0812 12:05:18.345990   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetMachineName
	I0812 12:05:18.346189   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHHostname
	I0812 12:05:18.349046   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:18.349378   65845 main.go:141] libmachine: (kindnet-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:eb:02", ip: ""} in network mk-kindnet-824402: {Iface:virbr3 ExpiryTime:2024-08-12 13:05:09 +0000 UTC Type:0 Mac:52:54:00:3a:eb:02 Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:kindnet-824402 Clientid:01:52:54:00:3a:eb:02}
	I0812 12:05:18.349404   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined IP address 192.168.72.181 and MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:18.349590   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHPort
	I0812 12:05:18.349791   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHKeyPath
	I0812 12:05:18.349982   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHKeyPath
	I0812 12:05:18.350134   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHUsername
	I0812 12:05:18.350299   65845 main.go:141] libmachine: Using SSH client type: native
	I0812 12:05:18.350474   65845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0812 12:05:18.350485   65845 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-824402 && echo "kindnet-824402" | sudo tee /etc/hostname
	I0812 12:05:18.467336   65845 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-824402
	
	I0812 12:05:18.467363   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHHostname
	I0812 12:05:18.470345   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:18.470710   65845 main.go:141] libmachine: (kindnet-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:eb:02", ip: ""} in network mk-kindnet-824402: {Iface:virbr3 ExpiryTime:2024-08-12 13:05:09 +0000 UTC Type:0 Mac:52:54:00:3a:eb:02 Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:kindnet-824402 Clientid:01:52:54:00:3a:eb:02}
	I0812 12:05:18.470736   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined IP address 192.168.72.181 and MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:18.470931   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHPort
	I0812 12:05:18.471120   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHKeyPath
	I0812 12:05:18.471294   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHKeyPath
	I0812 12:05:18.471433   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHUsername
	I0812 12:05:18.471625   65845 main.go:141] libmachine: Using SSH client type: native
	I0812 12:05:18.471808   65845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0812 12:05:18.471832   65845 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-824402' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-824402/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-824402' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 12:05:18.586141   65845 main.go:141] libmachine: SSH cmd err, output: <nil>: 
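(Editorial note: the shell just above is an idempotent /etc/hosts update: keep any existing mapping for the hostname, rewrite a 127.0.1.1 line if one exists, otherwise append a new one. A rough Go equivalent operating on the file contents as a string, illustrative rather than the code minikube runs.)

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry mirrors the shell logic: if some line already maps the
// hostname, leave the contents alone; otherwise rewrite an existing
// "127.0.1.1 ..." line or append a new one.
func ensureHostsEntry(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) > 0 && f[len(f)-1] == name {
			return hosts // mapping already present
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			return strings.Join(lines, "\n")
		}
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	hosts := "127.0.0.1 localhost\n127.0.1.1 oldname\n"
	fmt.Print(ensureHostsEntry(hosts, "kindnet-824402"))
}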
	I0812 12:05:18.586199   65845 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19409-3774/.minikube CaCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19409-3774/.minikube}
	I0812 12:05:18.586241   65845 buildroot.go:174] setting up certificates
	I0812 12:05:18.586255   65845 provision.go:84] configureAuth start
	I0812 12:05:18.586271   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetMachineName
	I0812 12:05:18.586575   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetIP
	I0812 12:05:18.589621   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:18.590159   65845 main.go:141] libmachine: (kindnet-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:eb:02", ip: ""} in network mk-kindnet-824402: {Iface:virbr3 ExpiryTime:2024-08-12 13:05:09 +0000 UTC Type:0 Mac:52:54:00:3a:eb:02 Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:kindnet-824402 Clientid:01:52:54:00:3a:eb:02}
	I0812 12:05:18.590189   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined IP address 192.168.72.181 and MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:18.590447   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHHostname
	I0812 12:05:18.593055   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:18.593415   65845 main.go:141] libmachine: (kindnet-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:eb:02", ip: ""} in network mk-kindnet-824402: {Iface:virbr3 ExpiryTime:2024-08-12 13:05:09 +0000 UTC Type:0 Mac:52:54:00:3a:eb:02 Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:kindnet-824402 Clientid:01:52:54:00:3a:eb:02}
	I0812 12:05:18.593446   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined IP address 192.168.72.181 and MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:18.593631   65845 provision.go:143] copyHostCerts
	I0812 12:05:18.593704   65845 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem, removing ...
	I0812 12:05:18.593717   65845 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem
	I0812 12:05:18.593786   65845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem (1082 bytes)
	I0812 12:05:18.593913   65845 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem, removing ...
	I0812 12:05:18.593926   65845 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem
	I0812 12:05:18.593961   65845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem (1123 bytes)
	I0812 12:05:18.594018   65845 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem, removing ...
	I0812 12:05:18.594025   65845 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem
	I0812 12:05:18.594043   65845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem (1679 bytes)
	I0812 12:05:18.594088   65845 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem org=jenkins.kindnet-824402 san=[127.0.0.1 192.168.72.181 kindnet-824402 localhost minikube]
	I0812 12:05:18.803275   65845 provision.go:177] copyRemoteCerts
	I0812 12:05:18.803334   65845 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 12:05:18.803356   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHHostname
	I0812 12:05:18.806192   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:18.806567   65845 main.go:141] libmachine: (kindnet-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:eb:02", ip: ""} in network mk-kindnet-824402: {Iface:virbr3 ExpiryTime:2024-08-12 13:05:09 +0000 UTC Type:0 Mac:52:54:00:3a:eb:02 Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:kindnet-824402 Clientid:01:52:54:00:3a:eb:02}
	I0812 12:05:18.806603   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined IP address 192.168.72.181 and MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:18.806784   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHPort
	I0812 12:05:18.807013   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHKeyPath
	I0812 12:05:18.807154   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHUsername
	I0812 12:05:18.807268   65845 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/kindnet-824402/id_rsa Username:docker}
	I0812 12:05:18.887373   65845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0812 12:05:18.912106   65845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0812 12:05:18.940783   65845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0812 12:05:18.967489   65845 provision.go:87] duration metric: took 381.192894ms to configureAuth
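(Editorial note: configureAuth copies the host CA material and issues a server certificate whose SANs cover the loopback address, the machine IP, the hostname, localhost and minikube. Below is a self-signed sketch with the same SAN list; the real flow signs with the CA under .minikube/certs, so this only illustrates the SAN handling.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative only: a self-signed server cert carrying the SAN list from
	// the log. minikube signs with .minikube/certs/ca.pem instead.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kindnet-824402"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"kindnet-824402", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.181")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}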
	I0812 12:05:18.967522   65845 buildroot.go:189] setting minikube options for container-runtime
	I0812 12:05:18.967730   65845 config.go:182] Loaded profile config "kindnet-824402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:05:18.967818   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHHostname
	I0812 12:05:18.970823   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:18.971266   65845 main.go:141] libmachine: (kindnet-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:eb:02", ip: ""} in network mk-kindnet-824402: {Iface:virbr3 ExpiryTime:2024-08-12 13:05:09 +0000 UTC Type:0 Mac:52:54:00:3a:eb:02 Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:kindnet-824402 Clientid:01:52:54:00:3a:eb:02}
	I0812 12:05:18.971303   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined IP address 192.168.72.181 and MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:18.971572   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHPort
	I0812 12:05:18.971797   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHKeyPath
	I0812 12:05:18.972007   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHKeyPath
	I0812 12:05:18.972200   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHUsername
	I0812 12:05:18.972387   65845 main.go:141] libmachine: Using SSH client type: native
	I0812 12:05:18.972594   65845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0812 12:05:18.972613   65845 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 12:05:19.245356   65845 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 12:05:19.245389   65845 main.go:141] libmachine: Checking connection to Docker...
	I0812 12:05:19.245400   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetURL
	I0812 12:05:19.246789   65845 main.go:141] libmachine: (kindnet-824402) DBG | Using libvirt version 6000000
	I0812 12:05:19.249242   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:19.249735   65845 main.go:141] libmachine: (kindnet-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:eb:02", ip: ""} in network mk-kindnet-824402: {Iface:virbr3 ExpiryTime:2024-08-12 13:05:09 +0000 UTC Type:0 Mac:52:54:00:3a:eb:02 Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:kindnet-824402 Clientid:01:52:54:00:3a:eb:02}
	I0812 12:05:19.249760   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined IP address 192.168.72.181 and MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:19.249946   65845 main.go:141] libmachine: Docker is up and running!
	I0812 12:05:19.249966   65845 main.go:141] libmachine: Reticulating splines...
	I0812 12:05:19.249973   65845 client.go:171] duration metric: took 24.32473135s to LocalClient.Create
	I0812 12:05:19.249996   65845 start.go:167] duration metric: took 24.324793174s to libmachine.API.Create "kindnet-824402"
	I0812 12:05:19.250008   65845 start.go:293] postStartSetup for "kindnet-824402" (driver="kvm2")
	I0812 12:05:19.250020   65845 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 12:05:19.250056   65845 main.go:141] libmachine: (kindnet-824402) Calling .DriverName
	I0812 12:05:19.250293   65845 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 12:05:19.250321   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHHostname
	I0812 12:05:19.252559   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:19.252968   65845 main.go:141] libmachine: (kindnet-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:eb:02", ip: ""} in network mk-kindnet-824402: {Iface:virbr3 ExpiryTime:2024-08-12 13:05:09 +0000 UTC Type:0 Mac:52:54:00:3a:eb:02 Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:kindnet-824402 Clientid:01:52:54:00:3a:eb:02}
	I0812 12:05:19.252999   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined IP address 192.168.72.181 and MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:19.253139   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHPort
	I0812 12:05:19.253353   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHKeyPath
	I0812 12:05:19.253533   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHUsername
	I0812 12:05:19.253689   65845 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/kindnet-824402/id_rsa Username:docker}
	I0812 12:05:19.339115   65845 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 12:05:19.343225   65845 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 12:05:19.343259   65845 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/addons for local assets ...
	I0812 12:05:19.343331   65845 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/files for local assets ...
	I0812 12:05:19.343444   65845 filesync.go:149] local asset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> 109272.pem in /etc/ssl/certs
	I0812 12:05:19.343600   65845 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 12:05:19.353017   65845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /etc/ssl/certs/109272.pem (1708 bytes)
	I0812 12:05:19.378642   65845 start.go:296] duration metric: took 128.619925ms for postStartSetup
	I0812 12:05:19.378702   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetConfigRaw
	I0812 12:05:19.379418   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetIP
	I0812 12:05:19.382137   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:19.382629   65845 main.go:141] libmachine: (kindnet-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:eb:02", ip: ""} in network mk-kindnet-824402: {Iface:virbr3 ExpiryTime:2024-08-12 13:05:09 +0000 UTC Type:0 Mac:52:54:00:3a:eb:02 Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:kindnet-824402 Clientid:01:52:54:00:3a:eb:02}
	I0812 12:05:19.382659   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined IP address 192.168.72.181 and MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:19.382933   65845 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kindnet-824402/config.json ...
	I0812 12:05:19.383117   65845 start.go:128] duration metric: took 24.480409557s to createHost
	I0812 12:05:19.383138   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHHostname
	I0812 12:05:19.385734   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:19.386117   65845 main.go:141] libmachine: (kindnet-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:eb:02", ip: ""} in network mk-kindnet-824402: {Iface:virbr3 ExpiryTime:2024-08-12 13:05:09 +0000 UTC Type:0 Mac:52:54:00:3a:eb:02 Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:kindnet-824402 Clientid:01:52:54:00:3a:eb:02}
	I0812 12:05:19.386137   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined IP address 192.168.72.181 and MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:19.386291   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHPort
	I0812 12:05:19.386481   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHKeyPath
	I0812 12:05:19.386651   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHKeyPath
	I0812 12:05:19.386830   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHUsername
	I0812 12:05:19.386978   65845 main.go:141] libmachine: Using SSH client type: native
	I0812 12:05:19.387172   65845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0812 12:05:19.387187   65845 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0812 12:05:19.489720   65845 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723464319.462580074
	
	I0812 12:05:19.489745   65845 fix.go:216] guest clock: 1723464319.462580074
	I0812 12:05:19.489755   65845 fix.go:229] Guest: 2024-08-12 12:05:19.462580074 +0000 UTC Remote: 2024-08-12 12:05:19.383128309 +0000 UTC m=+34.310344020 (delta=79.451765ms)
	I0812 12:05:19.489781   65845 fix.go:200] guest clock delta is within tolerance: 79.451765ms
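(Editorial note: the guest clock check parses the `date +%s.%N` output, compares it with the host time at the moment of the call, and accepts the machine when the delta stays inside a tolerance. The threshold is not shown in the log; 1s is assumed in this sketch.)

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta converts the guest's "seconds.nanoseconds" string into a time and
// returns how far it lags or leads the supplied host timestamp.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return host.Sub(guest), nil
}

func main() {
	const tolerance = time.Second              // assumed; not stated in the log
	host := time.Unix(0, 1723464319383128309)  // the "Remote" timestamp above
	delta, err := clockDelta("1723464319.462580074", host)
	if err != nil {
		panic(err)
	}
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta < tolerance && delta > -tolerance)
}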
	I0812 12:05:19.489808   65845 start.go:83] releasing machines lock for "kindnet-824402", held for 24.587262707s
	I0812 12:05:19.489837   65845 main.go:141] libmachine: (kindnet-824402) Calling .DriverName
	I0812 12:05:19.490153   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetIP
	I0812 12:05:19.493485   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:19.493927   65845 main.go:141] libmachine: (kindnet-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:eb:02", ip: ""} in network mk-kindnet-824402: {Iface:virbr3 ExpiryTime:2024-08-12 13:05:09 +0000 UTC Type:0 Mac:52:54:00:3a:eb:02 Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:kindnet-824402 Clientid:01:52:54:00:3a:eb:02}
	I0812 12:05:19.493954   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined IP address 192.168.72.181 and MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:19.494132   65845 main.go:141] libmachine: (kindnet-824402) Calling .DriverName
	I0812 12:05:19.494818   65845 main.go:141] libmachine: (kindnet-824402) Calling .DriverName
	I0812 12:05:19.495085   65845 main.go:141] libmachine: (kindnet-824402) Calling .DriverName
	I0812 12:05:19.495197   65845 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 12:05:19.495270   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHHostname
	I0812 12:05:19.495391   65845 ssh_runner.go:195] Run: cat /version.json
	I0812 12:05:19.495418   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHHostname
	I0812 12:05:19.498422   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:19.498717   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:19.498876   65845 main.go:141] libmachine: (kindnet-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:eb:02", ip: ""} in network mk-kindnet-824402: {Iface:virbr3 ExpiryTime:2024-08-12 13:05:09 +0000 UTC Type:0 Mac:52:54:00:3a:eb:02 Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:kindnet-824402 Clientid:01:52:54:00:3a:eb:02}
	I0812 12:05:19.498907   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined IP address 192.168.72.181 and MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:19.499153   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHPort
	I0812 12:05:19.499183   65845 main.go:141] libmachine: (kindnet-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:eb:02", ip: ""} in network mk-kindnet-824402: {Iface:virbr3 ExpiryTime:2024-08-12 13:05:09 +0000 UTC Type:0 Mac:52:54:00:3a:eb:02 Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:kindnet-824402 Clientid:01:52:54:00:3a:eb:02}
	I0812 12:05:19.499204   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined IP address 192.168.72.181 and MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:19.499325   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHKeyPath
	I0812 12:05:19.499425   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHPort
	I0812 12:05:19.499532   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHUsername
	I0812 12:05:19.499604   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHKeyPath
	I0812 12:05:19.499697   65845 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/kindnet-824402/id_rsa Username:docker}
	I0812 12:05:19.499790   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHUsername
	I0812 12:05:19.499970   65845 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/kindnet-824402/id_rsa Username:docker}
	I0812 12:05:19.582350   65845 ssh_runner.go:195] Run: systemctl --version
	I0812 12:05:19.619800   65845 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 12:05:19.780841   65845 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 12:05:19.787202   65845 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 12:05:19.787278   65845 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 12:05:19.804315   65845 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0812 12:05:19.804345   65845 start.go:495] detecting cgroup driver to use...
	I0812 12:05:19.804405   65845 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 12:05:19.825116   65845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 12:05:19.846034   65845 docker.go:217] disabling cri-docker service (if available) ...
	I0812 12:05:19.846123   65845 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 12:05:19.864147   65845 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 12:05:19.878890   65845 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 12:05:20.031858   65845 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 12:05:15.422401   65466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:15.922844   65466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:16.422813   65466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:16.922546   65466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:17.422782   65466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:17.922707   65466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:18.422460   65466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:18.923076   65466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:19.422157   65466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:19.922461   65466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:20.198065   65845 docker.go:233] disabling docker service ...
	I0812 12:05:20.198132   65845 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 12:05:20.213173   65845 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 12:05:20.227087   65845 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 12:05:20.374540   65845 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 12:05:20.515985   65845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 12:05:20.542507   65845 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 12:05:20.562172   65845 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0812 12:05:20.562234   65845 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:05:20.574046   65845 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 12:05:20.574114   65845 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:05:20.584854   65845 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:05:20.595450   65845 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:05:20.606167   65845 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 12:05:20.617135   65845 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:05:20.628254   65845 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:05:20.647622   65845 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
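(Editorial note: the sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup plus the unprivileged-port sysctl. A Go equivalent of the first two substitutions, operating on the file contents as a string; a sketch, not minikube's code.)

package main

import (
	"fmt"
	"regexp"
)

// patchCrioConf applies the same two line rewrites as the sed commands:
// force the pause image and the cgroupfs cgroup manager in 02-crio.conf.
func patchCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	in := "# pause_image = \"\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(patchCrioConf(in))
}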
	I0812 12:05:20.658061   65845 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 12:05:20.668341   65845 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0812 12:05:20.668416   65845 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0812 12:05:20.681062   65845 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 12:05:20.692408   65845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:05:20.820110   65845 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0812 12:05:20.973226   65845 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 12:05:20.973302   65845 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 12:05:20.978610   65845 start.go:563] Will wait 60s for crictl version
	I0812 12:05:20.978708   65845 ssh_runner.go:195] Run: which crictl
	I0812 12:05:20.983396   65845 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 12:05:21.026090   65845 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
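(Editorial note: after restarting CRI-O, start.go waits up to 60s for /var/run/crio/crio.sock and then for a working crictl, as the two "Will wait 60s" lines show. A minimal polling loop of that shape; the 500ms interval and error text are my choices.)

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for the CRI socket path until it exists or the timeout
// expires, mirroring the "Will wait 60s for socket path" step.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("CRI socket is ready")
}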
	I0812 12:05:21.026181   65845 ssh_runner.go:195] Run: crio --version
	I0812 12:05:21.054671   65845 ssh_runner.go:195] Run: crio --version
	I0812 12:05:21.089320   65845 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0812 12:05:19.492482   66240 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0812 12:05:19.492722   66240 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:05:19.492778   66240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:05:19.514236   66240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41845
	I0812 12:05:19.514767   66240 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:05:19.515371   66240 main.go:141] libmachine: Using API Version  1
	I0812 12:05:19.515387   66240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:05:19.515818   66240 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:05:19.515991   66240 main.go:141] libmachine: (calico-824402) Calling .GetMachineName
	I0812 12:05:19.516167   66240 main.go:141] libmachine: (calico-824402) Calling .DriverName
	I0812 12:05:19.516315   66240 start.go:159] libmachine.API.Create for "calico-824402" (driver="kvm2")
	I0812 12:05:19.516345   66240 client.go:168] LocalClient.Create starting
	I0812 12:05:19.516381   66240 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem
	I0812 12:05:19.516426   66240 main.go:141] libmachine: Decoding PEM data...
	I0812 12:05:19.516443   66240 main.go:141] libmachine: Parsing certificate...
	I0812 12:05:19.516498   66240 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem
	I0812 12:05:19.516518   66240 main.go:141] libmachine: Decoding PEM data...
	I0812 12:05:19.516531   66240 main.go:141] libmachine: Parsing certificate...
	I0812 12:05:19.516546   66240 main.go:141] libmachine: Running pre-create checks...
	I0812 12:05:19.516554   66240 main.go:141] libmachine: (calico-824402) Calling .PreCreateCheck
	I0812 12:05:19.517015   66240 main.go:141] libmachine: (calico-824402) Calling .GetConfigRaw
	I0812 12:05:19.517668   66240 main.go:141] libmachine: Creating machine...
	I0812 12:05:19.517686   66240 main.go:141] libmachine: (calico-824402) Calling .Create
	I0812 12:05:19.517807   66240 main.go:141] libmachine: (calico-824402) Creating KVM machine...
	I0812 12:05:19.519222   66240 main.go:141] libmachine: (calico-824402) DBG | found existing default KVM network
	I0812 12:05:19.520934   66240 main.go:141] libmachine: (calico-824402) DBG | I0812 12:05:19.520699   66402 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:d1:9a:98} reservation:<nil>}
	I0812 12:05:19.521882   66240 main.go:141] libmachine: (calico-824402) DBG | I0812 12:05:19.521790   66402 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:7a:11:43} reservation:<nil>}
	I0812 12:05:19.523325   66240 main.go:141] libmachine: (calico-824402) DBG | I0812 12:05:19.523245   66402 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000205c20}
	I0812 12:05:19.523386   66240 main.go:141] libmachine: (calico-824402) DBG | created network xml: 
	I0812 12:05:19.523407   66240 main.go:141] libmachine: (calico-824402) DBG | <network>
	I0812 12:05:19.523417   66240 main.go:141] libmachine: (calico-824402) DBG |   <name>mk-calico-824402</name>
	I0812 12:05:19.523442   66240 main.go:141] libmachine: (calico-824402) DBG |   <dns enable='no'/>
	I0812 12:05:19.523459   66240 main.go:141] libmachine: (calico-824402) DBG |   
	I0812 12:05:19.523472   66240 main.go:141] libmachine: (calico-824402) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0812 12:05:19.523480   66240 main.go:141] libmachine: (calico-824402) DBG |     <dhcp>
	I0812 12:05:19.523494   66240 main.go:141] libmachine: (calico-824402) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0812 12:05:19.523505   66240 main.go:141] libmachine: (calico-824402) DBG |     </dhcp>
	I0812 12:05:19.523519   66240 main.go:141] libmachine: (calico-824402) DBG |   </ip>
	I0812 12:05:19.523528   66240 main.go:141] libmachine: (calico-824402) DBG |   
	I0812 12:05:19.523537   66240 main.go:141] libmachine: (calico-824402) DBG | </network>
	I0812 12:05:19.523549   66240 main.go:141] libmachine: (calico-824402) DBG | 
	I0812 12:05:19.529506   66240 main.go:141] libmachine: (calico-824402) DBG | trying to create private KVM network mk-calico-824402 192.168.61.0/24...
	I0812 12:05:19.610850   66240 main.go:141] libmachine: (calico-824402) DBG | private KVM network mk-calico-824402 192.168.61.0/24 created
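(Editorial note: subnet selection walks candidate private /24s, skipping any that an existing libvirt network already claims, 192.168.39.0/24 and 192.168.50.0/24 here, and takes the first free one for the new mk-calico-824402 network. A toy version with the taken set passed in rather than read from the host's interfaces; the candidate order is an assumption.)

package main

import (
	"errors"
	"fmt"
)

// pickFreeSubnet returns the first 192.168.x.0/24 candidate that is not in the
// taken set. The candidate list is a stand-in for minikube's real scan order.
func pickFreeSubnet(taken map[string]bool) (string, error) {
	for _, third := range []int{39, 50, 61, 72, 83, 94} {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr, nil
		}
	}
	return "", errors.New("no free private subnet found")
}

func main() {
	taken := map[string]bool{"192.168.39.0/24": true, "192.168.50.0/24": true}
	subnet, err := pickFreeSubnet(taken)
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet", subnet) // 192.168.61.0/24
}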
	I0812 12:05:19.610903   66240 main.go:141] libmachine: (calico-824402) DBG | I0812 12:05:19.610817   66402 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 12:05:19.610939   66240 main.go:141] libmachine: (calico-824402) Setting up store path in /home/jenkins/minikube-integration/19409-3774/.minikube/machines/calico-824402 ...
	I0812 12:05:19.610969   66240 main.go:141] libmachine: (calico-824402) Building disk image from file:///home/jenkins/minikube-integration/19409-3774/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0812 12:05:19.611097   66240 main.go:141] libmachine: (calico-824402) Downloading /home/jenkins/minikube-integration/19409-3774/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19409-3774/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0812 12:05:19.862853   66240 main.go:141] libmachine: (calico-824402) DBG | I0812 12:05:19.862706   66402 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/calico-824402/id_rsa...
	I0812 12:05:20.186848   66240 main.go:141] libmachine: (calico-824402) DBG | I0812 12:05:20.186704   66402 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/calico-824402/calico-824402.rawdisk...
	I0812 12:05:20.186880   66240 main.go:141] libmachine: (calico-824402) DBG | Writing magic tar header
	I0812 12:05:20.186894   66240 main.go:141] libmachine: (calico-824402) DBG | Writing SSH key tar header
	I0812 12:05:20.186902   66240 main.go:141] libmachine: (calico-824402) DBG | I0812 12:05:20.186871   66402 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19409-3774/.minikube/machines/calico-824402 ...
	I0812 12:05:20.187051   66240 main.go:141] libmachine: (calico-824402) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/calico-824402
	I0812 12:05:20.187112   66240 main.go:141] libmachine: (calico-824402) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube/machines/calico-824402 (perms=drwx------)
	I0812 12:05:20.187123   66240 main.go:141] libmachine: (calico-824402) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube/machines
	I0812 12:05:20.187132   66240 main.go:141] libmachine: (calico-824402) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 12:05:20.187138   66240 main.go:141] libmachine: (calico-824402) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774
	I0812 12:05:20.187146   66240 main.go:141] libmachine: (calico-824402) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0812 12:05:20.187152   66240 main.go:141] libmachine: (calico-824402) DBG | Checking permissions on dir: /home/jenkins
	I0812 12:05:20.187165   66240 main.go:141] libmachine: (calico-824402) DBG | Checking permissions on dir: /home
	I0812 12:05:20.187184   66240 main.go:141] libmachine: (calico-824402) DBG | Skipping /home - not owner
	I0812 12:05:20.187200   66240 main.go:141] libmachine: (calico-824402) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube/machines (perms=drwxr-xr-x)
	I0812 12:05:20.187226   66240 main.go:141] libmachine: (calico-824402) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube (perms=drwxr-xr-x)
	I0812 12:05:20.187247   66240 main.go:141] libmachine: (calico-824402) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774 (perms=drwxrwxr-x)
	I0812 12:05:20.187262   66240 main.go:141] libmachine: (calico-824402) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0812 12:05:20.187273   66240 main.go:141] libmachine: (calico-824402) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0812 12:05:20.187296   66240 main.go:141] libmachine: (calico-824402) Creating domain...
	I0812 12:05:20.188706   66240 main.go:141] libmachine: (calico-824402) define libvirt domain using xml: 
	I0812 12:05:20.188734   66240 main.go:141] libmachine: (calico-824402) <domain type='kvm'>
	I0812 12:05:20.188745   66240 main.go:141] libmachine: (calico-824402)   <name>calico-824402</name>
	I0812 12:05:20.188759   66240 main.go:141] libmachine: (calico-824402)   <memory unit='MiB'>3072</memory>
	I0812 12:05:20.188769   66240 main.go:141] libmachine: (calico-824402)   <vcpu>2</vcpu>
	I0812 12:05:20.188778   66240 main.go:141] libmachine: (calico-824402)   <features>
	I0812 12:05:20.188786   66240 main.go:141] libmachine: (calico-824402)     <acpi/>
	I0812 12:05:20.188800   66240 main.go:141] libmachine: (calico-824402)     <apic/>
	I0812 12:05:20.188815   66240 main.go:141] libmachine: (calico-824402)     <pae/>
	I0812 12:05:20.188825   66240 main.go:141] libmachine: (calico-824402)     
	I0812 12:05:20.188841   66240 main.go:141] libmachine: (calico-824402)   </features>
	I0812 12:05:20.188849   66240 main.go:141] libmachine: (calico-824402)   <cpu mode='host-passthrough'>
	I0812 12:05:20.188853   66240 main.go:141] libmachine: (calico-824402)   
	I0812 12:05:20.188881   66240 main.go:141] libmachine: (calico-824402)   </cpu>
	I0812 12:05:20.188890   66240 main.go:141] libmachine: (calico-824402)   <os>
	I0812 12:05:20.188911   66240 main.go:141] libmachine: (calico-824402)     <type>hvm</type>
	I0812 12:05:20.188919   66240 main.go:141] libmachine: (calico-824402)     <boot dev='cdrom'/>
	I0812 12:05:20.188926   66240 main.go:141] libmachine: (calico-824402)     <boot dev='hd'/>
	I0812 12:05:20.188934   66240 main.go:141] libmachine: (calico-824402)     <bootmenu enable='no'/>
	I0812 12:05:20.188941   66240 main.go:141] libmachine: (calico-824402)   </os>
	I0812 12:05:20.188954   66240 main.go:141] libmachine: (calico-824402)   <devices>
	I0812 12:05:20.188967   66240 main.go:141] libmachine: (calico-824402)     <disk type='file' device='cdrom'>
	I0812 12:05:20.188975   66240 main.go:141] libmachine: (calico-824402)       <source file='/home/jenkins/minikube-integration/19409-3774/.minikube/machines/calico-824402/boot2docker.iso'/>
	I0812 12:05:20.188986   66240 main.go:141] libmachine: (calico-824402)       <target dev='hdc' bus='scsi'/>
	I0812 12:05:20.188994   66240 main.go:141] libmachine: (calico-824402)       <readonly/>
	I0812 12:05:20.189004   66240 main.go:141] libmachine: (calico-824402)     </disk>
	I0812 12:05:20.189014   66240 main.go:141] libmachine: (calico-824402)     <disk type='file' device='disk'>
	I0812 12:05:20.189028   66240 main.go:141] libmachine: (calico-824402)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0812 12:05:20.189044   66240 main.go:141] libmachine: (calico-824402)       <source file='/home/jenkins/minikube-integration/19409-3774/.minikube/machines/calico-824402/calico-824402.rawdisk'/>
	I0812 12:05:20.189059   66240 main.go:141] libmachine: (calico-824402)       <target dev='hda' bus='virtio'/>
	I0812 12:05:20.189074   66240 main.go:141] libmachine: (calico-824402)     </disk>
	I0812 12:05:20.189085   66240 main.go:141] libmachine: (calico-824402)     <interface type='network'>
	I0812 12:05:20.189095   66240 main.go:141] libmachine: (calico-824402)       <source network='mk-calico-824402'/>
	I0812 12:05:20.189106   66240 main.go:141] libmachine: (calico-824402)       <model type='virtio'/>
	I0812 12:05:20.189114   66240 main.go:141] libmachine: (calico-824402)     </interface>
	I0812 12:05:20.189124   66240 main.go:141] libmachine: (calico-824402)     <interface type='network'>
	I0812 12:05:20.189155   66240 main.go:141] libmachine: (calico-824402)       <source network='default'/>
	I0812 12:05:20.189177   66240 main.go:141] libmachine: (calico-824402)       <model type='virtio'/>
	I0812 12:05:20.189199   66240 main.go:141] libmachine: (calico-824402)     </interface>
	I0812 12:05:20.189211   66240 main.go:141] libmachine: (calico-824402)     <serial type='pty'>
	I0812 12:05:20.189222   66240 main.go:141] libmachine: (calico-824402)       <target port='0'/>
	I0812 12:05:20.189230   66240 main.go:141] libmachine: (calico-824402)     </serial>
	I0812 12:05:20.189243   66240 main.go:141] libmachine: (calico-824402)     <console type='pty'>
	I0812 12:05:20.189255   66240 main.go:141] libmachine: (calico-824402)       <target type='serial' port='0'/>
	I0812 12:05:20.189266   66240 main.go:141] libmachine: (calico-824402)     </console>
	I0812 12:05:20.189277   66240 main.go:141] libmachine: (calico-824402)     <rng model='virtio'>
	I0812 12:05:20.189290   66240 main.go:141] libmachine: (calico-824402)       <backend model='random'>/dev/random</backend>
	I0812 12:05:20.189300   66240 main.go:141] libmachine: (calico-824402)     </rng>
	I0812 12:05:20.189309   66240 main.go:141] libmachine: (calico-824402)     
	I0812 12:05:20.189316   66240 main.go:141] libmachine: (calico-824402)     
	I0812 12:05:20.189328   66240 main.go:141] libmachine: (calico-824402)   </devices>
	I0812 12:05:20.189352   66240 main.go:141] libmachine: (calico-824402) </domain>
	I0812 12:05:20.189366   66240 main.go:141] libmachine: (calico-824402) 
	I0812 12:05:20.193789   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:8b:8a:0c in network default
	I0812 12:05:20.194592   66240 main.go:141] libmachine: (calico-824402) Ensuring networks are active...
	I0812 12:05:20.194616   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:20.195496   66240 main.go:141] libmachine: (calico-824402) Ensuring network default is active
	I0812 12:05:20.195925   66240 main.go:141] libmachine: (calico-824402) Ensuring network mk-calico-824402 is active
	I0812 12:05:20.196516   66240 main.go:141] libmachine: (calico-824402) Getting domain xml...
	I0812 12:05:20.197424   66240 main.go:141] libmachine: (calico-824402) Creating domain...
	I0812 12:05:21.706093   66240 main.go:141] libmachine: (calico-824402) Waiting to get IP...
	I0812 12:05:21.706953   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:21.707398   66240 main.go:141] libmachine: (calico-824402) DBG | unable to find current IP address of domain calico-824402 in network mk-calico-824402
	I0812 12:05:21.707413   66240 main.go:141] libmachine: (calico-824402) DBG | I0812 12:05:21.707331   66402 retry.go:31] will retry after 284.452759ms: waiting for machine to come up
	I0812 12:05:21.994020   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:21.994690   66240 main.go:141] libmachine: (calico-824402) DBG | unable to find current IP address of domain calico-824402 in network mk-calico-824402
	I0812 12:05:21.994712   66240 main.go:141] libmachine: (calico-824402) DBG | I0812 12:05:21.994602   66402 retry.go:31] will retry after 302.577819ms: waiting for machine to come up
	I0812 12:05:22.299233   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:22.299858   66240 main.go:141] libmachine: (calico-824402) DBG | unable to find current IP address of domain calico-824402 in network mk-calico-824402
	I0812 12:05:22.299891   66240 main.go:141] libmachine: (calico-824402) DBG | I0812 12:05:22.299804   66402 retry.go:31] will retry after 354.379373ms: waiting for machine to come up
	I0812 12:05:22.655443   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:22.656052   66240 main.go:141] libmachine: (calico-824402) DBG | unable to find current IP address of domain calico-824402 in network mk-calico-824402
	I0812 12:05:22.656081   66240 main.go:141] libmachine: (calico-824402) DBG | I0812 12:05:22.655993   66402 retry.go:31] will retry after 589.196533ms: waiting for machine to come up
	I0812 12:05:23.246509   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:23.247065   66240 main.go:141] libmachine: (calico-824402) DBG | unable to find current IP address of domain calico-824402 in network mk-calico-824402
	I0812 12:05:23.247108   66240 main.go:141] libmachine: (calico-824402) DBG | I0812 12:05:23.247015   66402 retry.go:31] will retry after 484.614628ms: waiting for machine to come up
	I0812 12:05:23.733890   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:23.734457   66240 main.go:141] libmachine: (calico-824402) DBG | unable to find current IP address of domain calico-824402 in network mk-calico-824402
	I0812 12:05:23.734482   66240 main.go:141] libmachine: (calico-824402) DBG | I0812 12:05:23.734429   66402 retry.go:31] will retry after 706.622168ms: waiting for machine to come up
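(Editorial note: the retry.go lines above are a wait-for-IP loop: query the DHCP leases for the domain's MAC, and if nothing is there yet, sleep a jittered, growing backoff and try again. A schematic version with the lease lookup injected as a callback; the backoff constants are assumptions.)

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP keeps asking the injected lookup for the machine's address,
// sleeping a randomized, growing backoff between attempts, much like the
// "will retry after ..." lines above.
func waitForIP(lookup func() (string, bool), attempts int) (string, error) {
	for i := 0; i < attempts; i++ {
		if ip, ok := lookup(); ok {
			return ip, nil
		}
		backoff := time.Duration(200+rand.Intn(300*(i+1))) * time.Millisecond
		time.Sleep(backoff)
	}
	return "", errors.New("machine did not get an IP in time")
}

func main() {
	calls := 0
	lookup := func() (string, bool) {
		calls++
		if calls < 4 { // pretend the lease shows up on the 4th poll
			return "", false
		}
		return "192.168.61.2", true
	}
	ip, err := waitForIP(lookup, 10)
	if err != nil {
		panic(err)
	}
	fmt.Println("machine came up with IP", ip)
}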
	I0812 12:05:21.090809   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetIP
	I0812 12:05:21.094027   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:21.094446   65845 main.go:141] libmachine: (kindnet-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:eb:02", ip: ""} in network mk-kindnet-824402: {Iface:virbr3 ExpiryTime:2024-08-12 13:05:09 +0000 UTC Type:0 Mac:52:54:00:3a:eb:02 Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:kindnet-824402 Clientid:01:52:54:00:3a:eb:02}
	I0812 12:05:21.094470   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined IP address 192.168.72.181 and MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:21.094768   65845 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0812 12:05:21.099184   65845 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 12:05:21.113651   65845 kubeadm.go:883] updating cluster {Name:kindnet-824402 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-824402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.181 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0812 12:05:21.113802   65845 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 12:05:21.113849   65845 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 12:05:21.159975   65845 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0812 12:05:21.160055   65845 ssh_runner.go:195] Run: which lz4
	I0812 12:05:21.164195   65845 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0812 12:05:21.168415   65845 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0812 12:05:21.168447   65845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0812 12:05:22.588249   65845 crio.go:462] duration metric: took 1.424079799s to copy over tarball
	I0812 12:05:22.588335   65845 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
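(Editorial note: when no preloaded images are found in the CRI store, the preload tarball is copied to /preloaded.tar.lz4, after a stat-based existence check, and unpacked into /var with tar -I lz4. A sketch of that check-then-extract step, shelling out the same way; the paths are the ones from the log.)

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"

	// Existence check, standing in for the `stat -c "%s %y"` probe in the log.
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("preload tarball missing, it would be scp'd over first:", err)
		return
	}

	// Unpack into /var, preserving xattrs, as the log's tar invocation does.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("extraction failed:", err)
	}
}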
	I0812 12:05:20.422591   65466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:20.922509   65466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:21.422549   65466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:21.922651   65466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:22.423170   65466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:22.923184   65466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:23.422722   65466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:23.922195   65466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:24.422390   65466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:24.922272   65466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:25.422900   65466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:25.923154   65466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:26.423063   65466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:26.922180   65466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:27.422274   65466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:28.035796   65466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:28.435546   65466 kubeadm.go:1113] duration metric: took 14.163620785s to wait for elevateKubeSystemPrivileges
	I0812 12:05:28.435586   65466 kubeadm.go:394] duration metric: took 26.326200364s to StartCluster
	I0812 12:05:28.435607   65466 settings.go:142] acquiring lock: {Name:mk4060151bda1f131d6abfbbd19b6a8ab3b5e774 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:05:28.435692   65466 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 12:05:28.437005   65466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/kubeconfig: {Name:mke68f1d372d8e5baa69199b426efaec54860499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:05:28.437279   65466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0812 12:05:28.437298   65466 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 12:05:28.437394   65466 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0812 12:05:28.437480   65466 addons.go:69] Setting storage-provisioner=true in profile "auto-824402"
	I0812 12:05:28.437507   65466 addons.go:234] Setting addon storage-provisioner=true in "auto-824402"
	I0812 12:05:28.437512   65466 config.go:182] Loaded profile config "auto-824402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:05:28.437539   65466 host.go:66] Checking if "auto-824402" exists ...
	I0812 12:05:28.437560   65466 addons.go:69] Setting default-storageclass=true in profile "auto-824402"
	I0812 12:05:28.437586   65466 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-824402"
	I0812 12:05:28.437985   65466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:05:28.438016   65466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:05:28.437985   65466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:05:28.438096   65466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:05:28.439162   65466 out.go:177] * Verifying Kubernetes components...
	I0812 12:05:28.440466   65466 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:05:28.458193   65466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43823
	I0812 12:05:28.458201   65466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36589
	I0812 12:05:28.458798   65466 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:05:28.458907   65466 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:05:28.459385   65466 main.go:141] libmachine: Using API Version  1
	I0812 12:05:28.459413   65466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:05:28.459601   65466 main.go:141] libmachine: Using API Version  1
	I0812 12:05:28.459633   65466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:05:28.459745   65466 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:05:28.460008   65466 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:05:28.460203   65466 main.go:141] libmachine: (auto-824402) Calling .GetState
	I0812 12:05:28.460322   65466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:05:28.460352   65466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:05:28.463820   65466 addons.go:234] Setting addon default-storageclass=true in "auto-824402"
	I0812 12:05:28.463864   65466 host.go:66] Checking if "auto-824402" exists ...
	I0812 12:05:28.464268   65466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:05:28.464318   65466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:05:28.481812   65466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39991
	I0812 12:05:28.482281   65466 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:05:28.483004   65466 main.go:141] libmachine: Using API Version  1
	I0812 12:05:28.483025   65466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:05:28.483439   65466 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:05:28.483645   65466 main.go:141] libmachine: (auto-824402) Calling .GetState
	I0812 12:05:28.485374   65466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43665
	I0812 12:05:28.485700   65466 main.go:141] libmachine: (auto-824402) Calling .DriverName
	I0812 12:05:28.485784   65466 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:05:28.486256   65466 main.go:141] libmachine: Using API Version  1
	I0812 12:05:28.486278   65466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:05:28.486594   65466 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:05:28.487206   65466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:05:28.487241   65466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:05:28.487889   65466 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 12:05:24.443335   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:24.443932   66240 main.go:141] libmachine: (calico-824402) DBG | unable to find current IP address of domain calico-824402 in network mk-calico-824402
	I0812 12:05:24.443965   66240 main.go:141] libmachine: (calico-824402) DBG | I0812 12:05:24.443877   66402 retry.go:31] will retry after 943.343269ms: waiting for machine to come up
	I0812 12:05:25.388998   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:25.389535   66240 main.go:141] libmachine: (calico-824402) DBG | unable to find current IP address of domain calico-824402 in network mk-calico-824402
	I0812 12:05:25.389570   66240 main.go:141] libmachine: (calico-824402) DBG | I0812 12:05:25.389487   66402 retry.go:31] will retry after 947.120115ms: waiting for machine to come up
	I0812 12:05:26.338003   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:26.338578   66240 main.go:141] libmachine: (calico-824402) DBG | unable to find current IP address of domain calico-824402 in network mk-calico-824402
	I0812 12:05:26.338609   66240 main.go:141] libmachine: (calico-824402) DBG | I0812 12:05:26.338528   66402 retry.go:31] will retry after 1.74110489s: waiting for machine to come up
	I0812 12:05:28.082467   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:28.082940   66240 main.go:141] libmachine: (calico-824402) DBG | unable to find current IP address of domain calico-824402 in network mk-calico-824402
	I0812 12:05:28.082967   66240 main.go:141] libmachine: (calico-824402) DBG | I0812 12:05:28.082896   66402 retry.go:31] will retry after 1.41102741s: waiting for machine to come up
	I0812 12:05:28.489440   65466 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 12:05:28.489460   65466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 12:05:28.489479   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHHostname
	I0812 12:05:28.493847   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:05:28.494455   65466 main.go:141] libmachine: (auto-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:95:4f", ip: ""} in network mk-auto-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:04:44 +0000 UTC Type:0 Mac:52:54:00:a8:95:4f Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:auto-824402 Clientid:01:52:54:00:a8:95:4f}
	I0812 12:05:28.494484   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined IP address 192.168.39.142 and MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:05:28.494700   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHPort
	I0812 12:05:28.494963   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHKeyPath
	I0812 12:05:28.495145   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHUsername
	I0812 12:05:28.495279   65466 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/auto-824402/id_rsa Username:docker}
	I0812 12:05:28.509046   65466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37485
	I0812 12:05:28.509542   65466 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:05:28.510144   65466 main.go:141] libmachine: Using API Version  1
	I0812 12:05:28.510162   65466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:05:28.510748   65466 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:05:28.510978   65466 main.go:141] libmachine: (auto-824402) Calling .GetState
	I0812 12:05:28.512821   65466 main.go:141] libmachine: (auto-824402) Calling .DriverName
	I0812 12:05:28.513094   65466 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 12:05:28.513114   65466 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 12:05:28.513133   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHHostname
	I0812 12:05:28.517027   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:05:28.517394   65466 main.go:141] libmachine: (auto-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:95:4f", ip: ""} in network mk-auto-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:04:44 +0000 UTC Type:0 Mac:52:54:00:a8:95:4f Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:auto-824402 Clientid:01:52:54:00:a8:95:4f}
	I0812 12:05:28.517457   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined IP address 192.168.39.142 and MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:05:28.517819   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHPort
	I0812 12:05:28.517994   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHKeyPath
	I0812 12:05:28.518139   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHUsername
	I0812 12:05:28.518254   65466 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/auto-824402/id_rsa Username:docker}
	I0812 12:05:28.736588   65466 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 12:05:28.736653   65466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0812 12:05:28.813304   65466 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 12:05:28.930033   65466 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 12:05:29.058033   65466 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
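The "host record injected" message refers to the sed pipeline a few lines earlier, which splices a hosts block into CoreDNS's Corefile just before its forward directive so that host.minikube.internal resolves to the host IP. The Go sketch below applies the same edit to a Corefile string; the sample Corefile and the helper are illustrative assumptions, not minikube code, and the extra "log" plugin the sed also inserts is omitted.

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts block for host.minikube.internal immediately
// before the "forward . /etc/resolv.conf" line, mirroring the logged sed edit.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}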
	I0812 12:05:29.058158   65466 main.go:141] libmachine: Making call to close driver server
	I0812 12:05:29.058185   65466 main.go:141] libmachine: (auto-824402) Calling .Close
	I0812 12:05:29.058620   65466 main.go:141] libmachine: (auto-824402) DBG | Closing plugin on server side
	I0812 12:05:29.058702   65466 main.go:141] libmachine: Successfully made call to close driver server
	I0812 12:05:29.058725   65466 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 12:05:29.058743   65466 main.go:141] libmachine: Making call to close driver server
	I0812 12:05:29.058752   65466 main.go:141] libmachine: (auto-824402) Calling .Close
	I0812 12:05:29.059159   65466 main.go:141] libmachine: Successfully made call to close driver server
	I0812 12:05:29.059293   65466 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 12:05:29.059245   65466 main.go:141] libmachine: (auto-824402) DBG | Closing plugin on server side
	I0812 12:05:29.059352   65466 node_ready.go:35] waiting up to 15m0s for node "auto-824402" to be "Ready" ...
	I0812 12:05:29.078485   65466 node_ready.go:49] node "auto-824402" has status "Ready":"True"
	I0812 12:05:29.078512   65466 node_ready.go:38] duration metric: took 19.138981ms for node "auto-824402" to be "Ready" ...
	I0812 12:05:29.078523   65466 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 12:05:29.089225   65466 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-62srq" in "kube-system" namespace to be "Ready" ...
	I0812 12:05:29.094082   65466 main.go:141] libmachine: Making call to close driver server
	I0812 12:05:29.094102   65466 main.go:141] libmachine: (auto-824402) Calling .Close
	I0812 12:05:29.094369   65466 main.go:141] libmachine: Successfully made call to close driver server
	I0812 12:05:29.094419   65466 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 12:05:29.094444   65466 main.go:141] libmachine: (auto-824402) DBG | Closing plugin on server side
	I0812 12:05:29.431701   65466 main.go:141] libmachine: Making call to close driver server
	I0812 12:05:29.431732   65466 main.go:141] libmachine: (auto-824402) Calling .Close
	I0812 12:05:29.432033   65466 main.go:141] libmachine: (auto-824402) DBG | Closing plugin on server side
	I0812 12:05:29.432074   65466 main.go:141] libmachine: Successfully made call to close driver server
	I0812 12:05:29.432089   65466 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 12:05:29.432098   65466 main.go:141] libmachine: Making call to close driver server
	I0812 12:05:29.432109   65466 main.go:141] libmachine: (auto-824402) Calling .Close
	I0812 12:05:29.432958   65466 main.go:141] libmachine: (auto-824402) DBG | Closing plugin on server side
	I0812 12:05:29.432976   65466 main.go:141] libmachine: Successfully made call to close driver server
	I0812 12:05:29.433035   65466 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 12:05:29.436201   65466 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0812 12:05:25.180633   65845 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.592269274s)
	I0812 12:05:25.180670   65845 crio.go:469] duration metric: took 2.592388757s to extract the tarball
	I0812 12:05:25.180679   65845 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0812 12:05:25.222251   65845 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 12:05:25.272452   65845 crio.go:514] all images are preloaded for cri-o runtime.
	I0812 12:05:25.272482   65845 cache_images.go:84] Images are preloaded, skipping loading
	I0812 12:05:25.272494   65845 kubeadm.go:934] updating node { 192.168.72.181 8443 v1.30.3 crio true true} ...
	I0812 12:05:25.272612   65845 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-824402 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.181
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:kindnet-824402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
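The kubelet drop-in printed above is rendered by minikube from a template, with the node name, node IP and binaries path filled in per profile. As a rough illustration only (not minikube's actual template or generator), a text/template sketch that produces an equivalent drop-in could look like this:

package main

import (
	"os"
	"text/template"
)

// A simplified stand-in for the drop-in shown in the log; it reproduces only
// the fields visible above (binary path, node name, node IP).
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	_ = tmpl.Execute(os.Stdout, struct{ BinDir, NodeName, NodeIP string }{
		"/var/lib/minikube/binaries/v1.30.3", "kindnet-824402", "192.168.72.181",
	})
}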
	I0812 12:05:25.272679   65845 ssh_runner.go:195] Run: crio config
	I0812 12:05:25.321353   65845 cni.go:84] Creating CNI manager for "kindnet"
	I0812 12:05:25.321382   65845 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0812 12:05:25.321411   65845 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.181 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-824402 NodeName:kindnet-824402 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.181"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.181 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0812 12:05:25.321587   65845 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.181
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-824402"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.181
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.181"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
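One way to sanity-check a generated kubeadm.yaml like the one above, outside the test run, is kubeadm's own "kubeadm config validate" subcommand (available in recent kubeadm releases; minikube itself does not invoke it here). A small illustrative Go wrapper, assuming kubeadm is on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// validateKubeadmConfig shells out to "kubeadm config validate" against the
// generated config file; useful only as a manual sanity check.
func validateKubeadmConfig(path string) error {
	cmd := exec.Command("kubeadm", "config", "validate", "--config", path)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := validateKubeadmConfig("/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, "validation failed:", err)
	}
}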
	I0812 12:05:25.321660   65845 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0812 12:05:25.332272   65845 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 12:05:25.332352   65845 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0812 12:05:25.343112   65845 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0812 12:05:25.362326   65845 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 12:05:25.381164   65845 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0812 12:05:25.400756   65845 ssh_runner.go:195] Run: grep 192.168.72.181	control-plane.minikube.internal$ /etc/hosts
	I0812 12:05:25.405049   65845 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.181	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
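The bash one-liner above makes the /etc/hosts update idempotent: it strips any existing control-plane.minikube.internal line and appends a fresh one through a temp file. An illustrative Go equivalent of that edit (an assumption, not minikube code; running it against the real /etc/hosts needs root):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for the given host name and appends
// a fresh "IP<TAB>name" entry, writing through a temp file before moving it
// into place, like the "/tmp/h.$$" trick in the logged command.
func ensureHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // old entry; re-added below
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.72.181", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}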
	I0812 12:05:25.417615   65845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:05:25.564851   65845 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 12:05:25.583991   65845 certs.go:68] Setting up /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kindnet-824402 for IP: 192.168.72.181
	I0812 12:05:25.584023   65845 certs.go:194] generating shared ca certs ...
	I0812 12:05:25.584047   65845 certs.go:226] acquiring lock for ca certs: {Name:mkab0b83a49d5695bc804bf960b468b7a47f73f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:05:25.584254   65845 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key
	I0812 12:05:25.584322   65845 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key
	I0812 12:05:25.584354   65845 certs.go:256] generating profile certs ...
	I0812 12:05:25.584443   65845 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kindnet-824402/client.key
	I0812 12:05:25.584474   65845 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kindnet-824402/client.crt with IP's: []
	I0812 12:05:25.799128   65845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kindnet-824402/client.crt ...
	I0812 12:05:25.799159   65845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kindnet-824402/client.crt: {Name:mkafc877d1af61406f235aa44c1d42222d50d0bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:05:25.800480   65845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kindnet-824402/client.key ...
	I0812 12:05:25.800509   65845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kindnet-824402/client.key: {Name:mkda9919788753d268fed1ed30e9d5cf860f2500 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:05:25.800653   65845 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kindnet-824402/apiserver.key.ba2bd9de
	I0812 12:05:25.800677   65845 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kindnet-824402/apiserver.crt.ba2bd9de with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.181]
	I0812 12:05:26.053304   65845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kindnet-824402/apiserver.crt.ba2bd9de ...
	I0812 12:05:26.053339   65845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kindnet-824402/apiserver.crt.ba2bd9de: {Name:mkd18221e55e5d71c689973d44878f139a3924ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:05:26.053552   65845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kindnet-824402/apiserver.key.ba2bd9de ...
	I0812 12:05:26.053571   65845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kindnet-824402/apiserver.key.ba2bd9de: {Name:mk36d86fc1b484b847f9dccb0c8b2595ec9ad9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:05:26.053679   65845 certs.go:381] copying /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kindnet-824402/apiserver.crt.ba2bd9de -> /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kindnet-824402/apiserver.crt
	I0812 12:05:26.053780   65845 certs.go:385] copying /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kindnet-824402/apiserver.key.ba2bd9de -> /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kindnet-824402/apiserver.key
	I0812 12:05:26.053873   65845 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kindnet-824402/proxy-client.key
	I0812 12:05:26.053898   65845 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kindnet-824402/proxy-client.crt with IP's: []
	I0812 12:05:26.350383   65845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kindnet-824402/proxy-client.crt ...
	I0812 12:05:26.350414   65845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kindnet-824402/proxy-client.crt: {Name:mk7274ee5a0e550c16f8a42a6c0742da4a3d6fa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:05:26.350609   65845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kindnet-824402/proxy-client.key ...
	I0812 12:05:26.350625   65845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kindnet-824402/proxy-client.key: {Name:mk32a7d460a7ada7d35b07c4e7422a3f48c6bcfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:05:26.350855   65845 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem (1338 bytes)
	W0812 12:05:26.350909   65845 certs.go:480] ignoring /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927_empty.pem, impossibly tiny 0 bytes
	I0812 12:05:26.350924   65845 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem (1679 bytes)
	I0812 12:05:26.350951   65845 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem (1082 bytes)
	I0812 12:05:26.350973   65845 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem (1123 bytes)
	I0812 12:05:26.350998   65845 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem (1679 bytes)
	I0812 12:05:26.351036   65845 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem (1708 bytes)
	I0812 12:05:26.351669   65845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 12:05:26.381910   65845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 12:05:26.421115   65845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 12:05:26.458243   65845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 12:05:26.488431   65845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kindnet-824402/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0812 12:05:26.516406   65845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kindnet-824402/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0812 12:05:26.544304   65845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kindnet-824402/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 12:05:26.571964   65845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kindnet-824402/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0812 12:05:26.604901   65845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 12:05:26.632882   65845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem --> /usr/share/ca-certificates/10927.pem (1338 bytes)
	I0812 12:05:26.659609   65845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /usr/share/ca-certificates/109272.pem (1708 bytes)
	I0812 12:05:26.686333   65845 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 12:05:26.704277   65845 ssh_runner.go:195] Run: openssl version
	I0812 12:05:26.710411   65845 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 12:05:26.722332   65845 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:05:26.729110   65845 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:05:26.729181   65845 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:05:26.735699   65845 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 12:05:26.750120   65845 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10927.pem && ln -fs /usr/share/ca-certificates/10927.pem /etc/ssl/certs/10927.pem"
	I0812 12:05:26.762209   65845 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10927.pem
	I0812 12:05:26.767379   65845 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 10:33 /usr/share/ca-certificates/10927.pem
	I0812 12:05:26.767445   65845 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10927.pem
	I0812 12:05:26.773360   65845 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10927.pem /etc/ssl/certs/51391683.0"
	I0812 12:05:26.789377   65845 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109272.pem && ln -fs /usr/share/ca-certificates/109272.pem /etc/ssl/certs/109272.pem"
	I0812 12:05:26.808424   65845 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109272.pem
	I0812 12:05:26.813325   65845 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 10:33 /usr/share/ca-certificates/109272.pem
	I0812 12:05:26.813391   65845 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109272.pem
	I0812 12:05:26.819614   65845 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109272.pem /etc/ssl/certs/3ec20f2e.0"
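The openssl/ln sequence above registers each CA file under its OpenSSL subject hash in /etc/ssl/certs, which is how OpenSSL-based clients locate trusted certificates. A small illustrative Go helper performing the same hash-and-symlink step (assumes openssl on PATH and write access to the certs directory; not minikube code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash asks openssl for the certificate's subject hash and symlinks
// <certsDir>/<hash>.0 to the certificate, like the "ln -fs" in the log.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace a stale link, mirroring ln -f
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}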
	I0812 12:05:26.831054   65845 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 12:05:26.835772   65845 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0812 12:05:26.835833   65845 kubeadm.go:392] StartCluster: {Name:kindnet-824402 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-824402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.181 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 12:05:26.835923   65845 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0812 12:05:26.835966   65845 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 12:05:26.871421   65845 cri.go:89] found id: ""
	I0812 12:05:26.871529   65845 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0812 12:05:26.882611   65845 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 12:05:26.894011   65845 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 12:05:26.904562   65845 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 12:05:26.904592   65845 kubeadm.go:157] found existing configuration files:
	
	I0812 12:05:26.904644   65845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 12:05:26.914355   65845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 12:05:26.914412   65845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 12:05:26.924444   65845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 12:05:26.934772   65845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 12:05:26.934833   65845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 12:05:26.948878   65845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 12:05:26.962208   65845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 12:05:26.962271   65845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 12:05:26.975310   65845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 12:05:26.985637   65845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 12:05:26.985709   65845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 12:05:26.999013   65845 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 12:05:27.226035   65845 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 12:05:29.437615   65466 addons.go:510] duration metric: took 1.000212739s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0812 12:05:29.563088   65466 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-824402" context rescaled to 1 replicas
	I0812 12:05:29.495200   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:29.495800   66240 main.go:141] libmachine: (calico-824402) DBG | unable to find current IP address of domain calico-824402 in network mk-calico-824402
	I0812 12:05:29.495827   66240 main.go:141] libmachine: (calico-824402) DBG | I0812 12:05:29.495751   66402 retry.go:31] will retry after 1.924721995s: waiting for machine to come up
	I0812 12:05:31.422106   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:31.422618   66240 main.go:141] libmachine: (calico-824402) DBG | unable to find current IP address of domain calico-824402 in network mk-calico-824402
	I0812 12:05:31.422650   66240 main.go:141] libmachine: (calico-824402) DBG | I0812 12:05:31.422533   66402 retry.go:31] will retry after 2.492349447s: waiting for machine to come up
	I0812 12:05:33.917709   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:33.918303   66240 main.go:141] libmachine: (calico-824402) DBG | unable to find current IP address of domain calico-824402 in network mk-calico-824402
	I0812 12:05:33.918328   66240 main.go:141] libmachine: (calico-824402) DBG | I0812 12:05:33.918242   66402 retry.go:31] will retry after 2.945377877s: waiting for machine to come up
	I0812 12:05:31.092822   65466 pod_ready.go:97] error getting pod "coredns-7db6d8ff4d-62srq" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-62srq" not found
	I0812 12:05:31.092850   65466 pod_ready.go:81] duration metric: took 2.003585338s for pod "coredns-7db6d8ff4d-62srq" in "kube-system" namespace to be "Ready" ...
	E0812 12:05:31.092860   65466 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-7db6d8ff4d-62srq" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-62srq" not found
	I0812 12:05:31.092886   65466 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-b4xgv" in "kube-system" namespace to be "Ready" ...
	I0812 12:05:33.100401   65466 pod_ready.go:102] pod "coredns-7db6d8ff4d-b4xgv" in "kube-system" namespace has status "Ready":"False"
	I0812 12:05:35.101414   65466 pod_ready.go:102] pod "coredns-7db6d8ff4d-b4xgv" in "kube-system" namespace has status "Ready":"False"
	I0812 12:05:37.947222   65845 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0812 12:05:37.947277   65845 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 12:05:37.947378   65845 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 12:05:37.947505   65845 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 12:05:37.947639   65845 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0812 12:05:37.947739   65845 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 12:05:37.949232   65845 out.go:204]   - Generating certificates and keys ...
	I0812 12:05:37.949312   65845 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 12:05:37.949375   65845 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 12:05:37.949449   65845 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0812 12:05:37.949552   65845 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0812 12:05:37.949648   65845 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0812 12:05:37.949727   65845 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0812 12:05:37.949803   65845 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0812 12:05:37.949983   65845 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-824402 localhost] and IPs [192.168.72.181 127.0.0.1 ::1]
	I0812 12:05:37.950048   65845 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0812 12:05:37.950150   65845 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-824402 localhost] and IPs [192.168.72.181 127.0.0.1 ::1]
	I0812 12:05:37.950246   65845 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0812 12:05:37.950348   65845 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0812 12:05:37.950416   65845 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0812 12:05:37.950494   65845 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 12:05:37.950578   65845 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 12:05:37.950669   65845 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0812 12:05:37.950744   65845 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 12:05:37.950841   65845 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 12:05:37.950915   65845 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 12:05:37.951035   65845 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 12:05:37.951123   65845 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 12:05:37.952772   65845 out.go:204]   - Booting up control plane ...
	I0812 12:05:37.952849   65845 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 12:05:37.952940   65845 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 12:05:37.953018   65845 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 12:05:37.953141   65845 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 12:05:37.953240   65845 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 12:05:37.953295   65845 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 12:05:37.953487   65845 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0812 12:05:37.953594   65845 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0812 12:05:37.953686   65845 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.62137ms
	I0812 12:05:37.953773   65845 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0812 12:05:37.953836   65845 kubeadm.go:310] [api-check] The API server is healthy after 5.501432407s
	I0812 12:05:37.953940   65845 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0812 12:05:37.954051   65845 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0812 12:05:37.954133   65845 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0812 12:05:37.954356   65845 kubeadm.go:310] [mark-control-plane] Marking the node kindnet-824402 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0812 12:05:37.954412   65845 kubeadm.go:310] [bootstrap-token] Using token: c5nbps.9zprj10v71epulzx
	I0812 12:05:37.955788   65845 out.go:204]   - Configuring RBAC rules ...
	I0812 12:05:37.955930   65845 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0812 12:05:37.956049   65845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0812 12:05:37.956214   65845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0812 12:05:37.956388   65845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0812 12:05:37.956544   65845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0812 12:05:37.956645   65845 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0812 12:05:37.956750   65845 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0812 12:05:37.956789   65845 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0812 12:05:37.956842   65845 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0812 12:05:37.956852   65845 kubeadm.go:310] 
	I0812 12:05:37.956941   65845 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0812 12:05:37.956951   65845 kubeadm.go:310] 
	I0812 12:05:37.957052   65845 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0812 12:05:37.957066   65845 kubeadm.go:310] 
	I0812 12:05:37.957111   65845 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0812 12:05:37.957194   65845 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0812 12:05:37.957264   65845 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0812 12:05:37.957275   65845 kubeadm.go:310] 
	I0812 12:05:37.957362   65845 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0812 12:05:37.957369   65845 kubeadm.go:310] 
	I0812 12:05:37.957410   65845 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0812 12:05:37.957416   65845 kubeadm.go:310] 
	I0812 12:05:37.957485   65845 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0812 12:05:37.957604   65845 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0812 12:05:37.957679   65845 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0812 12:05:37.957688   65845 kubeadm.go:310] 
	I0812 12:05:37.957779   65845 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0812 12:05:37.957877   65845 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0812 12:05:37.957885   65845 kubeadm.go:310] 
	I0812 12:05:37.957957   65845 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token c5nbps.9zprj10v71epulzx \
	I0812 12:05:37.958116   65845 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc \
	I0812 12:05:37.958163   65845 kubeadm.go:310] 	--control-plane 
	I0812 12:05:37.958175   65845 kubeadm.go:310] 
	I0812 12:05:37.958270   65845 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0812 12:05:37.958282   65845 kubeadm.go:310] 
	I0812 12:05:37.958384   65845 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token c5nbps.9zprj10v71epulzx \
	I0812 12:05:37.958527   65845 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc 
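The --discovery-token-ca-cert-hash value in the join commands above is, per the kubeadm documentation, the SHA-256 of the cluster CA certificate's Subject Public Key Info, prefixed with "sha256:". The stand-alone Go sketch below recomputes that value from ca.crt; the helper itself is illustrative, not kubeadm code.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash returns "sha256:<hex>" over the CA certificate's raw
// SubjectPublicKeyInfo, the format kubeadm expects for join discovery.
func caCertHash(caPath string) (string, error) {
	pemBytes, err := os.ReadFile(caPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return "", fmt.Errorf("no PEM data in %s", caPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return "sha256:" + hex.EncodeToString(sum[:]), nil
}

func main() {
	hash, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println(hash)
}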
	I0812 12:05:37.958548   65845 cni.go:84] Creating CNI manager for "kindnet"
	I0812 12:05:37.960895   65845 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0812 12:05:36.866219   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:36.866681   66240 main.go:141] libmachine: (calico-824402) DBG | unable to find current IP address of domain calico-824402 in network mk-calico-824402
	I0812 12:05:36.866710   66240 main.go:141] libmachine: (calico-824402) DBG | I0812 12:05:36.866633   66402 retry.go:31] will retry after 4.878608892s: waiting for machine to come up
	I0812 12:05:37.962133   65845 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0812 12:05:37.967961   65845 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0812 12:05:37.967986   65845 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0812 12:05:37.987678   65845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0812 12:05:38.281933   65845 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 12:05:38.282040   65845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:38.282073   65845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-824402 minikube.k8s.io/updated_at=2024_08_12T12_05_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7 minikube.k8s.io/name=kindnet-824402 minikube.k8s.io/primary=true
	I0812 12:05:38.485503   65845 ops.go:34] apiserver oom_adj: -16
	I0812 12:05:38.485536   65845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:38.985748   65845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:39.486114   65845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:39.986569   65845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:37.599225   65466 pod_ready.go:102] pod "coredns-7db6d8ff4d-b4xgv" in "kube-system" namespace has status "Ready":"False"
	I0812 12:05:39.600462   65466 pod_ready.go:102] pod "coredns-7db6d8ff4d-b4xgv" in "kube-system" namespace has status "Ready":"False"
	I0812 12:05:41.746509   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:41.747049   66240 main.go:141] libmachine: (calico-824402) Found IP for machine: 192.168.61.88
	I0812 12:05:41.747070   66240 main.go:141] libmachine: (calico-824402) Reserving static IP address...
	I0812 12:05:41.747115   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has current primary IP address 192.168.61.88 and MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:41.747619   66240 main.go:141] libmachine: (calico-824402) DBG | unable to find host DHCP lease matching {name: "calico-824402", mac: "52:54:00:59:11:b8", ip: "192.168.61.88"} in network mk-calico-824402
	I0812 12:05:41.836526   66240 main.go:141] libmachine: (calico-824402) DBG | Getting to WaitForSSH function...
	I0812 12:05:41.836553   66240 main.go:141] libmachine: (calico-824402) Reserved static IP address: 192.168.61.88
	I0812 12:05:41.836567   66240 main.go:141] libmachine: (calico-824402) Waiting for SSH to be available...
	I0812 12:05:41.839779   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:41.840214   66240 main.go:141] libmachine: (calico-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:11:b8", ip: ""} in network mk-calico-824402: {Iface:virbr1 ExpiryTime:2024-08-12 13:05:35 +0000 UTC Type:0 Mac:52:54:00:59:11:b8 Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:minikube Clientid:01:52:54:00:59:11:b8}
	I0812 12:05:41.840246   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined IP address 192.168.61.88 and MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:41.840409   66240 main.go:141] libmachine: (calico-824402) DBG | Using SSH client type: external
	I0812 12:05:41.840431   66240 main.go:141] libmachine: (calico-824402) DBG | Using SSH private key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/calico-824402/id_rsa (-rw-------)
	I0812 12:05:41.840468   66240 main.go:141] libmachine: (calico-824402) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.88 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19409-3774/.minikube/machines/calico-824402/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0812 12:05:41.840482   66240 main.go:141] libmachine: (calico-824402) DBG | About to run SSH command:
	I0812 12:05:41.840495   66240 main.go:141] libmachine: (calico-824402) DBG | exit 0
	I0812 12:05:41.961699   66240 main.go:141] libmachine: (calico-824402) DBG | SSH cmd err, output: <nil>: 
	I0812 12:05:41.962062   66240 main.go:141] libmachine: (calico-824402) KVM machine creation complete!
	I0812 12:05:41.962435   66240 main.go:141] libmachine: (calico-824402) Calling .GetConfigRaw
	I0812 12:05:41.963039   66240 main.go:141] libmachine: (calico-824402) Calling .DriverName
	I0812 12:05:41.963312   66240 main.go:141] libmachine: (calico-824402) Calling .DriverName
	I0812 12:05:41.963474   66240 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0812 12:05:41.963492   66240 main.go:141] libmachine: (calico-824402) Calling .GetState
	I0812 12:05:41.965275   66240 main.go:141] libmachine: Detecting operating system of created instance...
	I0812 12:05:41.965294   66240 main.go:141] libmachine: Waiting for SSH to be available...
	I0812 12:05:41.965302   66240 main.go:141] libmachine: Getting to WaitForSSH function...
	I0812 12:05:41.965311   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHHostname
	I0812 12:05:41.968026   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:41.968476   66240 main.go:141] libmachine: (calico-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:11:b8", ip: ""} in network mk-calico-824402: {Iface:virbr1 ExpiryTime:2024-08-12 13:05:35 +0000 UTC Type:0 Mac:52:54:00:59:11:b8 Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:calico-824402 Clientid:01:52:54:00:59:11:b8}
	I0812 12:05:41.968499   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined IP address 192.168.61.88 and MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:41.968721   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHPort
	I0812 12:05:41.968985   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHKeyPath
	I0812 12:05:41.969217   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHKeyPath
	I0812 12:05:41.969390   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHUsername
	I0812 12:05:41.969588   66240 main.go:141] libmachine: Using SSH client type: native
	I0812 12:05:41.969775   66240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0812 12:05:41.969785   66240 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0812 12:05:42.068505   66240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
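The "exit 0" probes above are libmachine's WaitForSSH step: it repeatedly opens an SSH session and runs a no-op until the guest answers. A minimal shell sketch of that wait, using the host and key shown in the log (the attempt count and interval are assumptions; the real loop lives in libmachine's Go code):

	HOST=192.168.61.88
	KEY=/home/jenkins/minikube-integration/19409-3774/.minikube/machines/calico-824402/id_rsa
	# Poll until a trivial command succeeds over SSH.
	for attempt in $(seq 1 30); do
	  if ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	         -o ConnectTimeout=10 -i "$KEY" docker@"$HOST" 'exit 0' 2>/dev/null; then
	    echo "SSH is available after $attempt attempt(s)"; break
	  fi
	  sleep 2
	done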
	I0812 12:05:42.068534   66240 main.go:141] libmachine: Detecting the provisioner...
	I0812 12:05:42.068545   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHHostname
	I0812 12:05:42.071534   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:42.071984   66240 main.go:141] libmachine: (calico-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:11:b8", ip: ""} in network mk-calico-824402: {Iface:virbr1 ExpiryTime:2024-08-12 13:05:35 +0000 UTC Type:0 Mac:52:54:00:59:11:b8 Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:calico-824402 Clientid:01:52:54:00:59:11:b8}
	I0812 12:05:42.072013   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined IP address 192.168.61.88 and MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:42.072341   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHPort
	I0812 12:05:42.072561   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHKeyPath
	I0812 12:05:42.072731   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHKeyPath
	I0812 12:05:42.072965   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHUsername
	I0812 12:05:42.073158   66240 main.go:141] libmachine: Using SSH client type: native
	I0812 12:05:42.073354   66240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0812 12:05:42.073367   66240 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0812 12:05:42.177663   66240 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0812 12:05:42.177769   66240 main.go:141] libmachine: found compatible host: buildroot
	I0812 12:05:42.177787   66240 main.go:141] libmachine: Provisioning with buildroot...
	I0812 12:05:42.177801   66240 main.go:141] libmachine: (calico-824402) Calling .GetMachineName
	I0812 12:05:42.178082   66240 buildroot.go:166] provisioning hostname "calico-824402"
	I0812 12:05:42.178110   66240 main.go:141] libmachine: (calico-824402) Calling .GetMachineName
	I0812 12:05:42.178321   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHHostname
	I0812 12:05:42.181194   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:42.181645   66240 main.go:141] libmachine: (calico-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:11:b8", ip: ""} in network mk-calico-824402: {Iface:virbr1 ExpiryTime:2024-08-12 13:05:35 +0000 UTC Type:0 Mac:52:54:00:59:11:b8 Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:calico-824402 Clientid:01:52:54:00:59:11:b8}
	I0812 12:05:42.181680   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined IP address 192.168.61.88 and MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:42.181878   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHPort
	I0812 12:05:42.182102   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHKeyPath
	I0812 12:05:42.182279   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHKeyPath
	I0812 12:05:42.182453   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHUsername
	I0812 12:05:42.182675   66240 main.go:141] libmachine: Using SSH client type: native
	I0812 12:05:42.182876   66240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0812 12:05:42.182896   66240 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-824402 && echo "calico-824402" | sudo tee /etc/hostname
	I0812 12:05:42.295471   66240 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-824402
	
	I0812 12:05:42.295512   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHHostname
	I0812 12:05:42.298689   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:42.299091   66240 main.go:141] libmachine: (calico-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:11:b8", ip: ""} in network mk-calico-824402: {Iface:virbr1 ExpiryTime:2024-08-12 13:05:35 +0000 UTC Type:0 Mac:52:54:00:59:11:b8 Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:calico-824402 Clientid:01:52:54:00:59:11:b8}
	I0812 12:05:42.299119   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined IP address 192.168.61.88 and MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:42.299371   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHPort
	I0812 12:05:42.299636   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHKeyPath
	I0812 12:05:42.299831   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHKeyPath
	I0812 12:05:42.299995   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHUsername
	I0812 12:05:42.300222   66240 main.go:141] libmachine: Using SSH client type: native
	I0812 12:05:42.300436   66240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0812 12:05:42.300462   66240 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-824402' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-824402/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-824402' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 12:05:42.410260   66240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
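The two provisioning commands above set the guest hostname and then keep the 127.0.1.1 entry in /etc/hosts consistent with it. Generalized into a hedged sketch, with NAME as a placeholder for the profile name:

	NAME=calico-824402   # placeholder profile name
	sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
	# Idempotent /etc/hosts update, mirroring the logged SSH script.
	if ! grep -q "[[:space:]]$NAME" /etc/hosts; then
	  if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
	    sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" /etc/hosts
	  else
	    echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
	  fi
	fi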
	I0812 12:05:42.410292   66240 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19409-3774/.minikube CaCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19409-3774/.minikube}
	I0812 12:05:42.410315   66240 buildroot.go:174] setting up certificates
	I0812 12:05:42.410353   66240 provision.go:84] configureAuth start
	I0812 12:05:42.410365   66240 main.go:141] libmachine: (calico-824402) Calling .GetMachineName
	I0812 12:05:42.410689   66240 main.go:141] libmachine: (calico-824402) Calling .GetIP
	I0812 12:05:42.413488   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:42.413868   66240 main.go:141] libmachine: (calico-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:11:b8", ip: ""} in network mk-calico-824402: {Iface:virbr1 ExpiryTime:2024-08-12 13:05:35 +0000 UTC Type:0 Mac:52:54:00:59:11:b8 Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:calico-824402 Clientid:01:52:54:00:59:11:b8}
	I0812 12:05:42.413894   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined IP address 192.168.61.88 and MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:42.414040   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHHostname
	I0812 12:05:42.416793   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:42.417102   66240 main.go:141] libmachine: (calico-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:11:b8", ip: ""} in network mk-calico-824402: {Iface:virbr1 ExpiryTime:2024-08-12 13:05:35 +0000 UTC Type:0 Mac:52:54:00:59:11:b8 Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:calico-824402 Clientid:01:52:54:00:59:11:b8}
	I0812 12:05:42.417124   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined IP address 192.168.61.88 and MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:42.417259   66240 provision.go:143] copyHostCerts
	I0812 12:05:42.417333   66240 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem, removing ...
	I0812 12:05:42.417342   66240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem
	I0812 12:05:42.417406   66240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem (1082 bytes)
	I0812 12:05:42.417489   66240 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem, removing ...
	I0812 12:05:42.417496   66240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem
	I0812 12:05:42.417521   66240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem (1123 bytes)
	I0812 12:05:42.417584   66240 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem, removing ...
	I0812 12:05:42.417594   66240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem
	I0812 12:05:42.417623   66240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem (1679 bytes)
	I0812 12:05:42.417693   66240 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem org=jenkins.calico-824402 san=[127.0.0.1 192.168.61.88 calico-824402 localhost minikube]
	I0812 12:05:42.667527   66240 provision.go:177] copyRemoteCerts
	I0812 12:05:42.667588   66240 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 12:05:42.667621   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHHostname
	I0812 12:05:42.670569   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:42.670983   66240 main.go:141] libmachine: (calico-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:11:b8", ip: ""} in network mk-calico-824402: {Iface:virbr1 ExpiryTime:2024-08-12 13:05:35 +0000 UTC Type:0 Mac:52:54:00:59:11:b8 Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:calico-824402 Clientid:01:52:54:00:59:11:b8}
	I0812 12:05:42.671012   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined IP address 192.168.61.88 and MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:42.671257   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHPort
	I0812 12:05:42.671458   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHKeyPath
	I0812 12:05:42.671608   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHUsername
	I0812 12:05:42.671758   66240 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/calico-824402/id_rsa Username:docker}
	I0812 12:05:42.751065   66240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0812 12:05:42.775520   66240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0812 12:05:42.800331   66240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0812 12:05:42.824051   66240 provision.go:87] duration metric: took 413.681048ms to configureAuth
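configureAuth above regenerates the machine's server certificate with the SANs listed in the log (127.0.0.1, 192.168.61.88, calico-824402, localhost, minikube) and copies ca.pem, server.pem and server-key.pem into /etc/docker. Minikube does this in Go and signs with its own CA; purely as an illustration of the same SAN set, a self-signed stand-in could be produced with openssl (OpenSSL 1.1.1+):

	# Illustration only: self-signed, whereas the real cert is CA-signed.
	openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
	  -keyout server-key.pem -out server.pem \
	  -subj "/O=jenkins.calico-824402" \
	  -addext "subjectAltName=IP:127.0.0.1,IP:192.168.61.88,DNS:calico-824402,DNS:localhost,DNS:minikube"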
	I0812 12:05:42.824088   66240 buildroot.go:189] setting minikube options for container-runtime
	I0812 12:05:42.824257   66240 config.go:182] Loaded profile config "calico-824402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:05:42.824354   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHHostname
	I0812 12:05:42.827716   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:42.828122   66240 main.go:141] libmachine: (calico-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:11:b8", ip: ""} in network mk-calico-824402: {Iface:virbr1 ExpiryTime:2024-08-12 13:05:35 +0000 UTC Type:0 Mac:52:54:00:59:11:b8 Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:calico-824402 Clientid:01:52:54:00:59:11:b8}
	I0812 12:05:42.828148   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined IP address 192.168.61.88 and MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:42.828302   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHPort
	I0812 12:05:42.828545   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHKeyPath
	I0812 12:05:42.828733   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHKeyPath
	I0812 12:05:42.828916   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHUsername
	I0812 12:05:42.829058   66240 main.go:141] libmachine: Using SSH client type: native
	I0812 12:05:42.829215   66240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0812 12:05:42.829231   66240 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 12:05:43.087644   66240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 12:05:43.087672   66240 main.go:141] libmachine: Checking connection to Docker...
	I0812 12:05:43.087682   66240 main.go:141] libmachine: (calico-824402) Calling .GetURL
	I0812 12:05:43.089306   66240 main.go:141] libmachine: (calico-824402) DBG | Using libvirt version 6000000
	I0812 12:05:43.091558   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:43.091898   66240 main.go:141] libmachine: (calico-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:11:b8", ip: ""} in network mk-calico-824402: {Iface:virbr1 ExpiryTime:2024-08-12 13:05:35 +0000 UTC Type:0 Mac:52:54:00:59:11:b8 Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:calico-824402 Clientid:01:52:54:00:59:11:b8}
	I0812 12:05:43.091931   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined IP address 192.168.61.88 and MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:43.092036   66240 main.go:141] libmachine: Docker is up and running!
	I0812 12:05:43.092049   66240 main.go:141] libmachine: Reticulating splines...
	I0812 12:05:43.092056   66240 client.go:171] duration metric: took 23.575700826s to LocalClient.Create
	I0812 12:05:43.092080   66240 start.go:167] duration metric: took 23.575766486s to libmachine.API.Create "calico-824402"
	I0812 12:05:43.092092   66240 start.go:293] postStartSetup for "calico-824402" (driver="kvm2")
	I0812 12:05:43.092106   66240 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 12:05:43.092118   66240 main.go:141] libmachine: (calico-824402) Calling .DriverName
	I0812 12:05:43.092346   66240 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 12:05:43.092366   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHHostname
	I0812 12:05:43.094923   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:43.095352   66240 main.go:141] libmachine: (calico-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:11:b8", ip: ""} in network mk-calico-824402: {Iface:virbr1 ExpiryTime:2024-08-12 13:05:35 +0000 UTC Type:0 Mac:52:54:00:59:11:b8 Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:calico-824402 Clientid:01:52:54:00:59:11:b8}
	I0812 12:05:43.095382   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined IP address 192.168.61.88 and MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:43.095571   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHPort
	I0812 12:05:43.095783   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHKeyPath
	I0812 12:05:43.095984   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHUsername
	I0812 12:05:43.096177   66240 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/calico-824402/id_rsa Username:docker}
	I0812 12:05:43.176259   66240 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 12:05:43.180698   66240 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 12:05:43.180726   66240 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/addons for local assets ...
	I0812 12:05:43.180789   66240 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/files for local assets ...
	I0812 12:05:43.180909   66240 filesync.go:149] local asset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> 109272.pem in /etc/ssl/certs
	I0812 12:05:43.181070   66240 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 12:05:43.191803   66240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /etc/ssl/certs/109272.pem (1708 bytes)
	I0812 12:05:43.217260   66240 start.go:296] duration metric: took 125.150311ms for postStartSetup
	I0812 12:05:43.217350   66240 main.go:141] libmachine: (calico-824402) Calling .GetConfigRaw
	I0812 12:05:43.217961   66240 main.go:141] libmachine: (calico-824402) Calling .GetIP
	I0812 12:05:43.220797   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:43.221248   66240 main.go:141] libmachine: (calico-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:11:b8", ip: ""} in network mk-calico-824402: {Iface:virbr1 ExpiryTime:2024-08-12 13:05:35 +0000 UTC Type:0 Mac:52:54:00:59:11:b8 Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:calico-824402 Clientid:01:52:54:00:59:11:b8}
	I0812 12:05:43.221278   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined IP address 192.168.61.88 and MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:43.221522   66240 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/calico-824402/config.json ...
	I0812 12:05:43.221798   66240 start.go:128] duration metric: took 23.73176086s to createHost
	I0812 12:05:43.221825   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHHostname
	I0812 12:05:43.224106   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:43.224423   66240 main.go:141] libmachine: (calico-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:11:b8", ip: ""} in network mk-calico-824402: {Iface:virbr1 ExpiryTime:2024-08-12 13:05:35 +0000 UTC Type:0 Mac:52:54:00:59:11:b8 Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:calico-824402 Clientid:01:52:54:00:59:11:b8}
	I0812 12:05:43.224436   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined IP address 192.168.61.88 and MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:43.224636   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHPort
	I0812 12:05:43.224888   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHKeyPath
	I0812 12:05:43.225046   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHKeyPath
	I0812 12:05:43.225239   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHUsername
	I0812 12:05:43.225437   66240 main.go:141] libmachine: Using SSH client type: native
	I0812 12:05:43.225642   66240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0812 12:05:43.225657   66240 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0812 12:05:43.325627   66240 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723464343.296448786
	
	I0812 12:05:43.325657   66240 fix.go:216] guest clock: 1723464343.296448786
	I0812 12:05:43.325668   66240 fix.go:229] Guest: 2024-08-12 12:05:43.296448786 +0000 UTC Remote: 2024-08-12 12:05:43.221813024 +0000 UTC m=+43.964553583 (delta=74.635762ms)
	I0812 12:05:43.325716   66240 fix.go:200] guest clock delta is within tolerance: 74.635762ms
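The fix.go lines above compare the guest clock, read with date +%s.%N over SSH, against the host clock and accept a small delta (about 75ms here) rather than forcing a resync. A rough sketch of that comparison, reusing the HOST/KEY placeholders from the earlier SSH sketch (the tolerance value itself is not shown in this excerpt and is left out):

	guest=$(ssh -i "$KEY" docker@"$HOST" 'date +%s.%N')
	host=$(date +%s.%N)
	delta=$(echo "$host - $guest" | bc)
	echo "guest clock delta: ${delta}s"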
	I0812 12:05:43.325724   66240 start.go:83] releasing machines lock for "calico-824402", held for 23.835802917s
	I0812 12:05:43.325750   66240 main.go:141] libmachine: (calico-824402) Calling .DriverName
	I0812 12:05:43.326049   66240 main.go:141] libmachine: (calico-824402) Calling .GetIP
	I0812 12:05:43.329221   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:43.329632   66240 main.go:141] libmachine: (calico-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:11:b8", ip: ""} in network mk-calico-824402: {Iface:virbr1 ExpiryTime:2024-08-12 13:05:35 +0000 UTC Type:0 Mac:52:54:00:59:11:b8 Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:calico-824402 Clientid:01:52:54:00:59:11:b8}
	I0812 12:05:43.329667   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined IP address 192.168.61.88 and MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:43.329837   66240 main.go:141] libmachine: (calico-824402) Calling .DriverName
	I0812 12:05:43.330378   66240 main.go:141] libmachine: (calico-824402) Calling .DriverName
	I0812 12:05:43.330606   66240 main.go:141] libmachine: (calico-824402) Calling .DriverName
	I0812 12:05:43.330694   66240 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 12:05:43.330741   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHHostname
	I0812 12:05:43.330837   66240 ssh_runner.go:195] Run: cat /version.json
	I0812 12:05:43.330865   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHHostname
	I0812 12:05:43.333696   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:43.333836   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:43.334109   66240 main.go:141] libmachine: (calico-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:11:b8", ip: ""} in network mk-calico-824402: {Iface:virbr1 ExpiryTime:2024-08-12 13:05:35 +0000 UTC Type:0 Mac:52:54:00:59:11:b8 Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:calico-824402 Clientid:01:52:54:00:59:11:b8}
	I0812 12:05:43.334137   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined IP address 192.168.61.88 and MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:43.334246   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHPort
	I0812 12:05:43.334361   66240 main.go:141] libmachine: (calico-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:11:b8", ip: ""} in network mk-calico-824402: {Iface:virbr1 ExpiryTime:2024-08-12 13:05:35 +0000 UTC Type:0 Mac:52:54:00:59:11:b8 Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:calico-824402 Clientid:01:52:54:00:59:11:b8}
	I0812 12:05:43.334382   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined IP address 192.168.61.88 and MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:43.334427   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHKeyPath
	I0812 12:05:43.334578   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHPort
	I0812 12:05:43.334582   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHUsername
	I0812 12:05:43.334745   66240 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/calico-824402/id_rsa Username:docker}
	I0812 12:05:43.334758   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHKeyPath
	I0812 12:05:43.334926   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHUsername
	I0812 12:05:43.335074   66240 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/calico-824402/id_rsa Username:docker}
	I0812 12:05:43.451389   66240 ssh_runner.go:195] Run: systemctl --version
	I0812 12:05:43.457391   66240 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 12:05:43.621061   66240 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 12:05:43.627589   66240 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 12:05:43.627667   66240 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 12:05:43.646319   66240 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0812 12:05:43.646344   66240 start.go:495] detecting cgroup driver to use...
	I0812 12:05:43.646413   66240 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 12:05:43.664343   66240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 12:05:43.678035   66240 docker.go:217] disabling cri-docker service (if available) ...
	I0812 12:05:43.678107   66240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 12:05:43.691550   66240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 12:05:43.705782   66240 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 12:05:43.831254   66240 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 12:05:43.986693   66240 docker.go:233] disabling docker service ...
	I0812 12:05:43.986757   66240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 12:05:44.002655   66240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 12:05:44.020443   66240 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 12:05:44.144083   66240 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 12:05:44.265720   66240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
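Before configuring CRI-O, the competing runtimes are stopped, disabled and masked so that CRI-O owns the CRI socket (cri-docker first, then docker). The same sequence condensed into a hedged sketch; units that are absent simply make the stop commands no-ops:

	sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
	sudo systemctl disable cri-docker.socket docker.socket
	sudo systemctl mask cri-docker.service docker.service
	# Confirm docker is no longer active before switching to CRI-O.
	sudo systemctl is-active --quiet docker && echo "docker still active" || echo "docker disabled"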
	I0812 12:05:44.280202   66240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 12:05:44.302289   66240 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0812 12:05:44.302372   66240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:05:44.314342   66240 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 12:05:44.314430   66240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:05:44.325905   66240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:05:44.338532   66240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:05:44.350834   66240 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 12:05:44.363270   66240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:05:44.374142   66240 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:05:44.393434   66240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:05:44.406520   66240 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 12:05:44.417664   66240 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0812 12:05:44.417733   66240 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0812 12:05:44.431276   66240 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 12:05:44.442056   66240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:05:44.559718   66240 ssh_runner.go:195] Run: sudo systemctl restart crio
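Taken together, the sed commands between 12:05:44.30 and 12:05:44.39 converge the CRI-O drop-in on a known state: pause image registry.k8s.io/pause:3.9, the cgroupfs cgroup manager, conmon in the pod cgroup, and unprivileged low ports enabled, after which crio is restarted. A condensed sketch of the resulting /etc/crio/crio.conf.d/02-crio.conf, written whole for readability (minikube edits the existing file in place, and the exact table layout here is an assumption based on standard CRI-O configuration):

	sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<-'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF
	sudo systemctl daemon-reload
	sudo systemctl restart crio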
	I0812 12:05:44.711824   66240 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 12:05:44.711890   66240 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 12:05:44.716732   66240 start.go:563] Will wait 60s for crictl version
	I0812 12:05:44.716808   66240 ssh_runner.go:195] Run: which crictl
	I0812 12:05:44.720784   66240 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 12:05:44.765882   66240 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0812 12:05:44.766037   66240 ssh_runner.go:195] Run: crio --version
	I0812 12:05:44.795707   66240 ssh_runner.go:195] Run: crio --version
	I0812 12:05:44.827856   66240 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0812 12:05:40.485674   65845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:40.985632   65845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:41.485773   65845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:41.985629   65845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:42.485763   65845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:42.986114   65845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:43.486116   65845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:43.986261   65845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:44.486097   65845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:44.986271   65845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:42.099865   65466 pod_ready.go:102] pod "coredns-7db6d8ff4d-b4xgv" in "kube-system" namespace has status "Ready":"False"
	I0812 12:05:44.100312   65466 pod_ready.go:102] pod "coredns-7db6d8ff4d-b4xgv" in "kube-system" namespace has status "Ready":"False"
	I0812 12:05:44.829387   66240 main.go:141] libmachine: (calico-824402) Calling .GetIP
	I0812 12:05:44.832127   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:44.832537   66240 main.go:141] libmachine: (calico-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:11:b8", ip: ""} in network mk-calico-824402: {Iface:virbr1 ExpiryTime:2024-08-12 13:05:35 +0000 UTC Type:0 Mac:52:54:00:59:11:b8 Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:calico-824402 Clientid:01:52:54:00:59:11:b8}
	I0812 12:05:44.832562   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined IP address 192.168.61.88 and MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:05:44.832855   66240 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0812 12:05:44.837068   66240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 12:05:44.850021   66240 kubeadm.go:883] updating cluster {Name:calico-824402 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:calico-824402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.61.88 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0812 12:05:44.850129   66240 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 12:05:44.850177   66240 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 12:05:44.883798   66240 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0812 12:05:44.883872   66240 ssh_runner.go:195] Run: which lz4
	I0812 12:05:44.888051   66240 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0812 12:05:44.892083   66240 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0812 12:05:44.892120   66240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0812 12:05:46.275875   66240 crio.go:462] duration metric: took 1.387867894s to copy over tarball
	I0812 12:05:46.275975   66240 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0812 12:05:48.827243   66240 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.551242302s)
	I0812 12:05:48.827279   66240 crio.go:469] duration metric: took 2.551379379s to extract the tarball
	I0812 12:05:48.827289   66240 ssh_runner.go:146] rm: /preloaded.tar.lz4
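The preload path above avoids pulling every image over the network: the cached tarball is copied into the guest, unpacked into /var (which backs CRI-O's image store), then removed. The same steps condensed; the tar command and path are exactly those in the log, and the final grep is just an illustrative check:

	sudo tar --xattrs --xattrs-include security.capability \
	     -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm -f /preloaded.tar.lz4
	sudo crictl images --output json | grep -c kube-apiserver   # expect >= 1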
	I0812 12:05:48.865987   66240 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 12:05:48.913447   66240 crio.go:514] all images are preloaded for cri-o runtime.
	I0812 12:05:48.913478   66240 cache_images.go:84] Images are preloaded, skipping loading
	I0812 12:05:48.913488   66240 kubeadm.go:934] updating node { 192.168.61.88 8443 v1.30.3 crio true true} ...
	I0812 12:05:48.913607   66240 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-824402 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.88
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:calico-824402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I0812 12:05:48.913679   66240 ssh_runner.go:195] Run: crio config
	I0812 12:05:48.960669   66240 cni.go:84] Creating CNI manager for "calico"
	I0812 12:05:48.960698   66240 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0812 12:05:48.960730   66240 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.88 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-824402 NodeName:calico-824402 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.88"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.88 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0812 12:05:48.960921   66240 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.88
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-824402"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.88
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.88"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0812 12:05:48.960989   66240 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0812 12:05:48.970997   66240 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 12:05:48.971085   66240 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0812 12:05:48.981251   66240 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0812 12:05:48.998933   66240 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 12:05:49.015693   66240 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0812 12:05:49.032917   66240 ssh_runner.go:195] Run: grep 192.168.61.88	control-plane.minikube.internal$ /etc/hosts
	I0812 12:05:49.037222   66240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.88	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 12:05:49.049829   66240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:05:49.191612   66240 ssh_runner.go:195] Run: sudo systemctl start kubelet
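At this point the generated kubeadm config has been written to /var/tmp/minikube/kubeadm.yaml.new, the kubelet drop-in installed, and the kubelet started. The init step itself is outside this excerpt; a config like the one above is normally consumed via kubeadm's --config flag, so the follow-on call plausibly looks like this (an assumption about the exact invocation, not a command taken from the log):

	sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init \
	     --config /var/tmp/minikube/kubeadm.yaml.new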
	I0812 12:05:49.210303   66240 certs.go:68] Setting up /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/calico-824402 for IP: 192.168.61.88
	I0812 12:05:49.210336   66240 certs.go:194] generating shared ca certs ...
	I0812 12:05:49.210350   66240 certs.go:226] acquiring lock for ca certs: {Name:mkab0b83a49d5695bc804bf960b468b7a47f73f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:05:49.210531   66240 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key
	I0812 12:05:49.210582   66240 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key
	I0812 12:05:49.210595   66240 certs.go:256] generating profile certs ...
	I0812 12:05:49.210652   66240 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/calico-824402/client.key
	I0812 12:05:49.210683   66240 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/calico-824402/client.crt with IP's: []
	I0812 12:05:45.485570   65845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:45.985653   65845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:46.486621   65845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:46.985754   65845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:47.486118   65845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:47.986225   65845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:48.486094   65845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:48.985691   65845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:49.486002   65845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:49.986467   65845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:46.601964   65466 pod_ready.go:102] pod "coredns-7db6d8ff4d-b4xgv" in "kube-system" namespace has status "Ready":"False"
	I0812 12:05:49.100495   65466 pod_ready.go:102] pod "coredns-7db6d8ff4d-b4xgv" in "kube-system" namespace has status "Ready":"False"
	I0812 12:05:50.485937   65845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:51.239314   65845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:05:52.075821   65845 kubeadm.go:1113] duration metric: took 13.793852679s to wait for elevateKubeSystemPrivileges
	I0812 12:05:52.075854   65845 kubeadm.go:394] duration metric: took 25.240024701s to StartCluster
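The long run of "kubectl get sa default" calls from process 65845 (12:05:38 through 12:05:51) is the elevateKubeSystemPrivileges wait: the kindnet-824402 control plane is polled until the default ServiceAccount exists, which here took about 13.8s before the cluster-admin binding could be applied. The equivalent wait as a shell loop (the interval is an assumption; the log shows roughly 500ms between attempts):

	until sudo /var/lib/minikube/binaries/v1.30.3/kubectl \
	      --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
	  sleep 0.5
	done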
	I0812 12:05:52.075870   65845 settings.go:142] acquiring lock: {Name:mk4060151bda1f131d6abfbbd19b6a8ab3b5e774 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:05:52.075956   65845 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 12:05:52.077618   65845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/kubeconfig: {Name:mke68f1d372d8e5baa69199b426efaec54860499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:05:52.077866   65845 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0812 12:05:52.077889   65845 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.181 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 12:05:52.077975   65845 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0812 12:05:52.078071   65845 addons.go:69] Setting storage-provisioner=true in profile "kindnet-824402"
	I0812 12:05:52.078103   65845 addons.go:234] Setting addon storage-provisioner=true in "kindnet-824402"
	I0812 12:05:52.078096   65845 addons.go:69] Setting default-storageclass=true in profile "kindnet-824402"
	I0812 12:05:52.078121   65845 config.go:182] Loaded profile config "kindnet-824402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:05:52.078137   65845 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-824402"
	I0812 12:05:52.078142   65845 host.go:66] Checking if "kindnet-824402" exists ...
	I0812 12:05:52.078620   65845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:05:52.078644   65845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:05:52.078701   65845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:05:52.078744   65845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:05:52.079392   65845 out.go:177] * Verifying Kubernetes components...
	I0812 12:05:52.080710   65845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:05:52.096192   65845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36061
	I0812 12:05:52.096341   65845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35163
	I0812 12:05:52.097033   65845 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:05:52.097082   65845 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:05:52.098234   65845 main.go:141] libmachine: Using API Version  1
	I0812 12:05:52.098257   65845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:05:52.098304   65845 main.go:141] libmachine: Using API Version  1
	I0812 12:05:52.098323   65845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:05:52.098801   65845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:05:52.099033   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetState
	I0812 12:05:52.102809   65845 addons.go:234] Setting addon default-storageclass=true in "kindnet-824402"
	I0812 12:05:52.102848   65845 host.go:66] Checking if "kindnet-824402" exists ...
	I0812 12:05:52.103137   65845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:05:52.103171   65845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:05:52.103380   65845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:05:52.104060   65845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:05:52.104094   65845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:05:52.124028   65845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43903
	I0812 12:05:52.124083   65845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38791
	I0812 12:05:52.124450   65845 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:05:52.124545   65845 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:05:52.125004   65845 main.go:141] libmachine: Using API Version  1
	I0812 12:05:52.125027   65845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:05:52.125148   65845 main.go:141] libmachine: Using API Version  1
	I0812 12:05:52.125161   65845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:05:52.125332   65845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:05:52.125534   65845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:05:52.125680   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetState
	I0812 12:05:52.125976   65845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:05:52.126018   65845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:05:52.127991   65845 main.go:141] libmachine: (kindnet-824402) Calling .DriverName
	I0812 12:05:52.130267   65845 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 12:05:52.131862   65845 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 12:05:52.131879   65845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 12:05:52.131902   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHHostname
	I0812 12:05:52.134911   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:52.135321   65845 main.go:141] libmachine: (kindnet-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:eb:02", ip: ""} in network mk-kindnet-824402: {Iface:virbr3 ExpiryTime:2024-08-12 13:05:09 +0000 UTC Type:0 Mac:52:54:00:3a:eb:02 Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:kindnet-824402 Clientid:01:52:54:00:3a:eb:02}
	I0812 12:05:52.135370   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined IP address 192.168.72.181 and MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:52.135611   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHPort
	I0812 12:05:52.135801   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHKeyPath
	I0812 12:05:52.135963   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHUsername
	I0812 12:05:52.136157   65845 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/kindnet-824402/id_rsa Username:docker}
	I0812 12:05:52.145825   65845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37785
	I0812 12:05:52.147052   65845 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:05:52.147649   65845 main.go:141] libmachine: Using API Version  1
	I0812 12:05:52.147673   65845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:05:52.148101   65845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:05:52.148313   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetState
	I0812 12:05:52.150538   65845 main.go:141] libmachine: (kindnet-824402) Calling .DriverName
	I0812 12:05:52.150771   65845 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 12:05:52.150790   65845 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 12:05:52.150809   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHHostname
	I0812 12:05:52.154036   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:52.154473   65845 main.go:141] libmachine: (kindnet-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:eb:02", ip: ""} in network mk-kindnet-824402: {Iface:virbr3 ExpiryTime:2024-08-12 13:05:09 +0000 UTC Type:0 Mac:52:54:00:3a:eb:02 Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:kindnet-824402 Clientid:01:52:54:00:3a:eb:02}
	I0812 12:05:52.154495   65845 main.go:141] libmachine: (kindnet-824402) DBG | domain kindnet-824402 has defined IP address 192.168.72.181 and MAC address 52:54:00:3a:eb:02 in network mk-kindnet-824402
	I0812 12:05:52.154748   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHPort
	I0812 12:05:52.154868   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHKeyPath
	I0812 12:05:52.155061   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetSSHUsername
	I0812 12:05:52.155193   65845 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/kindnet-824402/id_rsa Username:docker}
	I0812 12:05:52.382775   65845 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 12:05:52.383017   65845 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0812 12:05:52.418782   65845 node_ready.go:35] waiting up to 15m0s for node "kindnet-824402" to be "Ready" ...
	I0812 12:05:52.484100   65845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 12:05:52.518190   65845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 12:05:52.779165   65845 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
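Note: the host-record injection above pipes the CoreDNS ConfigMap through sed and replaces it. A minimal sketch of the same pipeline, run from a workstation kubeconfig instead of /var/lib/minikube/kubeconfig inside the VM (context name and gateway IP taken from this run; the sed expressions are copied from the logged command, not independently verified):

    # Insert a hosts{} stanza mapping host.minikube.internal to the host gateway,
    # and prepend "log" to the errors plugin, then replace the ConfigMap in place.
    kubectl --context kindnet-824402 -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' \
            -e '/^        errors *$/i \        log' \
      | kubectl --context kindnet-824402 -n kube-system replace -f -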
	I0812 12:05:52.843937   65845 main.go:141] libmachine: Making call to close driver server
	I0812 12:05:52.843963   65845 main.go:141] libmachine: (kindnet-824402) Calling .Close
	I0812 12:05:52.844335   65845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 12:05:52.844357   65845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 12:05:52.844367   65845 main.go:141] libmachine: Making call to close driver server
	I0812 12:05:52.844376   65845 main.go:141] libmachine: (kindnet-824402) Calling .Close
	I0812 12:05:52.844650   65845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 12:05:52.844710   65845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 12:05:52.844746   65845 main.go:141] libmachine: (kindnet-824402) DBG | Closing plugin on server side
	I0812 12:05:52.885449   65845 main.go:141] libmachine: Making call to close driver server
	I0812 12:05:52.885474   65845 main.go:141] libmachine: (kindnet-824402) Calling .Close
	I0812 12:05:52.885831   65845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 12:05:52.885851   65845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 12:05:53.019550   65845 main.go:141] libmachine: Making call to close driver server
	I0812 12:05:53.019575   65845 main.go:141] libmachine: (kindnet-824402) Calling .Close
	I0812 12:05:53.019912   65845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 12:05:53.019963   65845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 12:05:53.019983   65845 main.go:141] libmachine: Making call to close driver server
	I0812 12:05:53.020001   65845 main.go:141] libmachine: (kindnet-824402) Calling .Close
	I0812 12:05:53.020307   65845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 12:05:53.020329   65845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 12:05:53.022188   65845 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
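For reference, the two addons reported above can also be toggled on an existing profile from the minikube CLI; a short sketch using the profile name from this run (the storage class name "standard" is minikube's usual default and is an assumption here):

    # Enable the same addons on the kindnet-824402 profile.
    minikube -p kindnet-824402 addons enable storage-provisioner
    minikube -p kindnet-824402 addons enable default-storageclass

    # Confirm the provisioner pod and default storage class are present.
    kubectl --context kindnet-824402 -n kube-system get pod storage-provisioner
    kubectl --context kindnet-824402 get storageclass standard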
	I0812 12:05:49.464998   66240 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/calico-824402/client.crt ...
	I0812 12:05:49.465028   66240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/calico-824402/client.crt: {Name:mk8f3436d935b7f815bf4cb32258a9c2a23d7fd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:05:49.465238   66240 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/calico-824402/client.key ...
	I0812 12:05:49.465261   66240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/calico-824402/client.key: {Name:mk42046f4e99f693aaef40ebd1b0bf020ad6bc06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:05:49.465399   66240 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/calico-824402/apiserver.key.ff8883cd
	I0812 12:05:49.465416   66240 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/calico-824402/apiserver.crt.ff8883cd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.88]
	I0812 12:05:49.666250   66240 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/calico-824402/apiserver.crt.ff8883cd ...
	I0812 12:05:49.666282   66240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/calico-824402/apiserver.crt.ff8883cd: {Name:mk05ddc7f3795e47d66bd785431073f21e9a171e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:05:49.666475   66240 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/calico-824402/apiserver.key.ff8883cd ...
	I0812 12:05:49.666497   66240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/calico-824402/apiserver.key.ff8883cd: {Name:mk4253ab83ce2466cdbe6f402217baedb9edd52a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:05:49.666593   66240 certs.go:381] copying /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/calico-824402/apiserver.crt.ff8883cd -> /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/calico-824402/apiserver.crt
	I0812 12:05:49.666713   66240 certs.go:385] copying /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/calico-824402/apiserver.key.ff8883cd -> /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/calico-824402/apiserver.key
	I0812 12:05:49.666798   66240 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/calico-824402/proxy-client.key
	I0812 12:05:49.666815   66240 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/calico-824402/proxy-client.crt with IP's: []
	I0812 12:05:49.806465   66240 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/calico-824402/proxy-client.crt ...
	I0812 12:05:49.806496   66240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/calico-824402/proxy-client.crt: {Name:mk9cd948069f4126f5a13fe117cb919893b6ea85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:05:49.806647   66240 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/calico-824402/proxy-client.key ...
	I0812 12:05:49.806657   66240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/calico-824402/proxy-client.key: {Name:mk839bd1749059b8b82e5c8d58dc8b8d209d5784 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
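The profile certs above are generated in Go by minikube's crypto.go; as an equivalent sketch only, the same "CA-signed cert with IP SANs" step could be reproduced with openssl (all file names here are hypothetical; the SAN IPs are the ones logged for the apiserver cert):

    # Key and CSR for a hypothetical apiserver cert.
    openssl genrsa -out apiserver.key 2048
    openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr

    # SANs matching the IPs in the log entry above.
    printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.61.88\n' > san.ext

    # Sign with an existing CA key pair (ca.crt/ca.key are placeholders).
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -days 365 -extfile san.ext -out apiserver.crt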
	I0812 12:05:49.806834   66240 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem (1338 bytes)
	W0812 12:05:49.806870   66240 certs.go:480] ignoring /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927_empty.pem, impossibly tiny 0 bytes
	I0812 12:05:49.806883   66240 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem (1679 bytes)
	I0812 12:05:49.806903   66240 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem (1082 bytes)
	I0812 12:05:49.806924   66240 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem (1123 bytes)
	I0812 12:05:49.806945   66240 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem (1679 bytes)
	I0812 12:05:49.806985   66240 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem (1708 bytes)
	I0812 12:05:49.807535   66240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 12:05:49.841342   66240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 12:05:49.868117   66240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 12:05:49.897584   66240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 12:05:49.929223   66240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/calico-824402/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0812 12:05:49.961460   66240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/calico-824402/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0812 12:05:49.989658   66240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/calico-824402/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 12:05:50.016837   66240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/calico-824402/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0812 12:05:50.044531   66240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 12:05:50.071803   66240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem --> /usr/share/ca-certificates/10927.pem (1338 bytes)
	I0812 12:05:50.103854   66240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /usr/share/ca-certificates/109272.pem (1708 bytes)
	I0812 12:05:50.131310   66240 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 12:05:50.150440   66240 ssh_runner.go:195] Run: openssl version
	I0812 12:05:50.157017   66240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109272.pem && ln -fs /usr/share/ca-certificates/109272.pem /etc/ssl/certs/109272.pem"
	I0812 12:05:50.168396   66240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109272.pem
	I0812 12:05:50.173223   66240 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 10:33 /usr/share/ca-certificates/109272.pem
	I0812 12:05:50.173299   66240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109272.pem
	I0812 12:05:50.179537   66240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109272.pem /etc/ssl/certs/3ec20f2e.0"
	I0812 12:05:50.192545   66240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 12:05:50.204130   66240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:05:50.210049   66240 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:05:50.210122   66240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:05:50.216173   66240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 12:05:50.228389   66240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10927.pem && ln -fs /usr/share/ca-certificates/10927.pem /etc/ssl/certs/10927.pem"
	I0812 12:05:50.239765   66240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10927.pem
	I0812 12:05:50.244476   66240 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 10:33 /usr/share/ca-certificates/10927.pem
	I0812 12:05:50.244552   66240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10927.pem
	I0812 12:05:50.250888   66240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10927.pem /etc/ssl/certs/51391683.0"
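The hash/symlink commands above are how OpenSSL's trust directory works: the subject hash of the certificate names a .0 symlink under /etc/ssl/certs. A condensed sketch of the same step (paths taken from the log):

    # Compute the subject hash and create the symlink OpenSSL will look up.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
    # On Debian-style systems, update-ca-certificates automates this for certs
    # dropped under /usr/local/share/ca-certificates.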
	I0812 12:05:50.263452   66240 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 12:05:50.268387   66240 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0812 12:05:50.268458   66240 kubeadm.go:392] StartCluster: {Name:calico-824402 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-824402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.61.88 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 12:05:50.268556   66240 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0812 12:05:50.268647   66240 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 12:05:50.313155   66240 cri.go:89] found id: ""
	I0812 12:05:50.313236   66240 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0812 12:05:50.324005   66240 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 12:05:50.334639   66240 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 12:05:50.344611   66240 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 12:05:50.344630   66240 kubeadm.go:157] found existing configuration files:
	
	I0812 12:05:50.344682   66240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 12:05:50.354444   66240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 12:05:50.354513   66240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 12:05:50.364896   66240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 12:05:50.375286   66240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 12:05:50.375349   66240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 12:05:50.386939   66240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 12:05:50.397441   66240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 12:05:50.397514   66240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 12:05:50.409452   66240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 12:05:50.419322   66240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 12:05:50.419429   66240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 12:05:50.430983   66240 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 12:05:50.648430   66240 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
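The Service-Kubelet preflight warning above is informational; the remedy it names is simply enabling the kubelet unit inside the guest (reachable, for example, via "minikube ssh -p calico-824402"):

    # Run inside the guest VM to address the [WARNING Service-Kubelet] note.
    sudo systemctl enable kubelet.service
    sudo systemctl is-enabled kubelet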
	I0812 12:05:53.023611   65845 addons.go:510] duration metric: took 945.63966ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0812 12:05:53.283804   65845 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-824402" context rescaled to 1 replicas
	I0812 12:05:54.729413   65845 node_ready.go:53] node "kindnet-824402" has status "Ready":"False"
	I0812 12:05:51.610197   65466 pod_ready.go:102] pod "coredns-7db6d8ff4d-b4xgv" in "kube-system" namespace has status "Ready":"False"
	I0812 12:05:54.100798   65466 pod_ready.go:102] pod "coredns-7db6d8ff4d-b4xgv" in "kube-system" namespace has status "Ready":"False"
	I0812 12:05:56.923924   65845 node_ready.go:53] node "kindnet-824402" has status "Ready":"False"
	I0812 12:05:59.422498   65845 node_ready.go:53] node "kindnet-824402" has status "Ready":"False"
	I0812 12:05:56.600427   65466 pod_ready.go:102] pod "coredns-7db6d8ff4d-b4xgv" in "kube-system" namespace has status "Ready":"False"
	I0812 12:05:59.099990   65466 pod_ready.go:102] pod "coredns-7db6d8ff4d-b4xgv" in "kube-system" namespace has status "Ready":"False"
	I0812 12:06:01.770051   66240 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0812 12:06:01.770124   66240 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 12:06:01.770261   66240 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 12:06:01.770508   66240 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 12:06:01.770666   66240 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0812 12:06:01.770765   66240 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 12:06:01.772486   66240 out.go:204]   - Generating certificates and keys ...
	I0812 12:06:01.772588   66240 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 12:06:01.772670   66240 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 12:06:01.772730   66240 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0812 12:06:01.772778   66240 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0812 12:06:01.772832   66240 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0812 12:06:01.772903   66240 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0812 12:06:01.772960   66240 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0812 12:06:01.773083   66240 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [calico-824402 localhost] and IPs [192.168.61.88 127.0.0.1 ::1]
	I0812 12:06:01.773155   66240 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0812 12:06:01.773284   66240 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [calico-824402 localhost] and IPs [192.168.61.88 127.0.0.1 ::1]
	I0812 12:06:01.773363   66240 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0812 12:06:01.773443   66240 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0812 12:06:01.773510   66240 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0812 12:06:01.773578   66240 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 12:06:01.773649   66240 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 12:06:01.773729   66240 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0812 12:06:01.773805   66240 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 12:06:01.773897   66240 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 12:06:01.773957   66240 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 12:06:01.774026   66240 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 12:06:01.774099   66240 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 12:06:01.775779   66240 out.go:204]   - Booting up control plane ...
	I0812 12:06:01.775882   66240 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 12:06:01.775964   66240 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 12:06:01.776040   66240 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 12:06:01.776166   66240 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 12:06:01.776282   66240 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 12:06:01.776347   66240 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 12:06:01.776511   66240 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0812 12:06:01.776579   66240 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0812 12:06:01.776660   66240 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.988885ms
	I0812 12:06:01.776735   66240 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0812 12:06:01.776786   66240 kubeadm.go:310] [api-check] The API server is healthy after 6.000915292s
	I0812 12:06:01.776925   66240 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0812 12:06:01.777085   66240 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0812 12:06:01.777165   66240 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0812 12:06:01.777362   66240 kubeadm.go:310] [mark-control-plane] Marking the node calico-824402 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0812 12:06:01.777450   66240 kubeadm.go:310] [bootstrap-token] Using token: qyzcx5.hjwgx8k9salx56n6
	I0812 12:06:01.779150   66240 out.go:204]   - Configuring RBAC rules ...
	I0812 12:06:01.779274   66240 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0812 12:06:01.779389   66240 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0812 12:06:01.779541   66240 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0812 12:06:01.779653   66240 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0812 12:06:01.779765   66240 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0812 12:06:01.779837   66240 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0812 12:06:01.779959   66240 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0812 12:06:01.780029   66240 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0812 12:06:01.780101   66240 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0812 12:06:01.780110   66240 kubeadm.go:310] 
	I0812 12:06:01.780181   66240 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0812 12:06:01.780190   66240 kubeadm.go:310] 
	I0812 12:06:01.780283   66240 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0812 12:06:01.780292   66240 kubeadm.go:310] 
	I0812 12:06:01.780357   66240 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0812 12:06:01.780437   66240 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0812 12:06:01.780510   66240 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0812 12:06:01.780520   66240 kubeadm.go:310] 
	I0812 12:06:01.780614   66240 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0812 12:06:01.780624   66240 kubeadm.go:310] 
	I0812 12:06:01.780711   66240 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0812 12:06:01.780725   66240 kubeadm.go:310] 
	I0812 12:06:01.780815   66240 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0812 12:06:01.780920   66240 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0812 12:06:01.780993   66240 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0812 12:06:01.781000   66240 kubeadm.go:310] 
	I0812 12:06:01.781069   66240 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0812 12:06:01.781134   66240 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0812 12:06:01.781139   66240 kubeadm.go:310] 
	I0812 12:06:01.781218   66240 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qyzcx5.hjwgx8k9salx56n6 \
	I0812 12:06:01.781330   66240 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc \
	I0812 12:06:01.781351   66240 kubeadm.go:310] 	--control-plane 
	I0812 12:06:01.781359   66240 kubeadm.go:310] 
	I0812 12:06:01.781464   66240 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0812 12:06:01.781474   66240 kubeadm.go:310] 
	I0812 12:06:01.781574   66240 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qyzcx5.hjwgx8k9salx56n6 \
	I0812 12:06:01.781736   66240 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc 
	I0812 12:06:01.781748   66240 cni.go:84] Creating CNI manager for "calico"
	I0812 12:06:01.783239   66240 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0812 12:06:01.785070   66240 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0812 12:06:01.785088   66240 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (253923 bytes)
	I0812 12:06:01.812476   66240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0812 12:06:03.138332   66240 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.325817422s)
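After the bundled Calico manifest is applied, a quick way to confirm the CNI pods schedule and become Ready is a rollout check; the daemonset name, namespace, and label here follow the standard calico.yaml manifest and are assumptions about the bundled copy:

    kubectl --context calico-824402 -n kube-system rollout status daemonset/calico-node --timeout=5m
    kubectl --context calico-824402 -n kube-system get pods -l k8s-app=calico-node -o wide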
	I0812 12:06:03.138383   66240 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 12:06:03.138490   66240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:06:03.138490   66240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-824402 minikube.k8s.io/updated_at=2024_08_12T12_06_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7 minikube.k8s.io/name=calico-824402 minikube.k8s.io/primary=true
	I0812 12:06:03.246723   66240 ops.go:34] apiserver oom_adj: -16
	I0812 12:06:03.246820   66240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:06:03.746954   66240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:06:04.247540   66240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:06:01.423280   65845 node_ready.go:53] node "kindnet-824402" has status "Ready":"False"
	I0812 12:06:03.923067   65845 node_ready.go:53] node "kindnet-824402" has status "Ready":"False"
	I0812 12:06:01.599287   65466 pod_ready.go:102] pod "coredns-7db6d8ff4d-b4xgv" in "kube-system" namespace has status "Ready":"False"
	I0812 12:06:03.600333   65466 pod_ready.go:102] pod "coredns-7db6d8ff4d-b4xgv" in "kube-system" namespace has status "Ready":"False"
	I0812 12:06:04.746932   66240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:06:05.247793   66240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:06:05.746847   66240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:06:06.247883   66240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:06:06.747257   66240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:06:07.247589   66240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:06:07.746842   66240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:06:08.247564   66240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:06:08.746927   66240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:06:09.247670   66240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:06:06.424161   65845 node_ready.go:53] node "kindnet-824402" has status "Ready":"False"
	I0812 12:06:08.922602   65845 node_ready.go:53] node "kindnet-824402" has status "Ready":"False"
	I0812 12:06:09.924141   65845 node_ready.go:49] node "kindnet-824402" has status "Ready":"True"
	I0812 12:06:09.924166   65845 node_ready.go:38] duration metric: took 17.505346324s for node "kindnet-824402" to be "Ready" ...
	I0812 12:06:09.924175   65845 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 12:06:09.931732   65845 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-nv5rl" in "kube-system" namespace to be "Ready" ...
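The test helper polls pod status itself; the same readiness gate for the CoreDNS pod it is tracking can be expressed with kubectl wait (label k8s-app=kube-dns is the one listed in the wait set above):

    kubectl --context kindnet-824402 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=15m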
	I0812 12:06:06.100965   65466 pod_ready.go:102] pod "coredns-7db6d8ff4d-b4xgv" in "kube-system" namespace has status "Ready":"False"
	I0812 12:06:08.599554   65466 pod_ready.go:102] pod "coredns-7db6d8ff4d-b4xgv" in "kube-system" namespace has status "Ready":"False"
	I0812 12:06:10.600654   65466 pod_ready.go:92] pod "coredns-7db6d8ff4d-b4xgv" in "kube-system" namespace has status "Ready":"True"
	I0812 12:06:10.600677   65466 pod_ready.go:81] duration metric: took 39.507785067s for pod "coredns-7db6d8ff4d-b4xgv" in "kube-system" namespace to be "Ready" ...
	I0812 12:06:10.600687   65466 pod_ready.go:78] waiting up to 15m0s for pod "etcd-auto-824402" in "kube-system" namespace to be "Ready" ...
	I0812 12:06:10.606443   65466 pod_ready.go:92] pod "etcd-auto-824402" in "kube-system" namespace has status "Ready":"True"
	I0812 12:06:10.606464   65466 pod_ready.go:81] duration metric: took 5.77211ms for pod "etcd-auto-824402" in "kube-system" namespace to be "Ready" ...
	I0812 12:06:10.606474   65466 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-auto-824402" in "kube-system" namespace to be "Ready" ...
	I0812 12:06:10.613074   65466 pod_ready.go:92] pod "kube-apiserver-auto-824402" in "kube-system" namespace has status "Ready":"True"
	I0812 12:06:10.613101   65466 pod_ready.go:81] duration metric: took 6.620724ms for pod "kube-apiserver-auto-824402" in "kube-system" namespace to be "Ready" ...
	I0812 12:06:10.613113   65466 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-auto-824402" in "kube-system" namespace to be "Ready" ...
	I0812 12:06:10.619416   65466 pod_ready.go:92] pod "kube-controller-manager-auto-824402" in "kube-system" namespace has status "Ready":"True"
	I0812 12:06:10.619441   65466 pod_ready.go:81] duration metric: took 6.319887ms for pod "kube-controller-manager-auto-824402" in "kube-system" namespace to be "Ready" ...
	I0812 12:06:10.619450   65466 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-mkd8w" in "kube-system" namespace to be "Ready" ...
	I0812 12:06:10.628971   65466 pod_ready.go:92] pod "kube-proxy-mkd8w" in "kube-system" namespace has status "Ready":"True"
	I0812 12:06:10.628998   65466 pod_ready.go:81] duration metric: took 9.540078ms for pod "kube-proxy-mkd8w" in "kube-system" namespace to be "Ready" ...
	I0812 12:06:10.629010   65466 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-auto-824402" in "kube-system" namespace to be "Ready" ...
	I0812 12:06:10.998028   65466 pod_ready.go:92] pod "kube-scheduler-auto-824402" in "kube-system" namespace has status "Ready":"True"
	I0812 12:06:10.998056   65466 pod_ready.go:81] duration metric: took 369.038153ms for pod "kube-scheduler-auto-824402" in "kube-system" namespace to be "Ready" ...
	I0812 12:06:10.998066   65466 pod_ready.go:38] duration metric: took 41.919529708s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 12:06:10.998084   65466 api_server.go:52] waiting for apiserver process to appear ...
	I0812 12:06:10.998142   65466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 12:06:11.015595   65466 api_server.go:72] duration metric: took 42.578254667s to wait for apiserver process to appear ...
	I0812 12:06:11.015623   65466 api_server.go:88] waiting for apiserver healthz status ...
	I0812 12:06:11.015645   65466 api_server.go:253] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0812 12:06:11.022033   65466 api_server.go:279] https://192.168.39.142:8443/healthz returned 200:
	ok
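The healthz probe above is an HTTPS GET against the apiserver endpoint from the log; the same check can be reproduced from a workstation, assuming the usual public-info-viewer binding that allows anonymous access to /healthz:

    # Via kubectl's raw API access (uses the kubeconfig credentials).
    kubectl --context auto-824402 get --raw /healthz
    # Or directly against the logged endpoint, skipping certificate verification.
    curl -k https://192.168.39.142:8443/healthz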
	I0812 12:06:11.023175   65466 api_server.go:141] control plane version: v1.30.3
	I0812 12:06:11.023198   65466 api_server.go:131] duration metric: took 7.568404ms to wait for apiserver health ...
	I0812 12:06:11.023206   65466 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 12:06:11.199792   65466 system_pods.go:59] 7 kube-system pods found
	I0812 12:06:11.199828   65466 system_pods.go:61] "coredns-7db6d8ff4d-b4xgv" [973c2caf-341a-4b8e-b8c9-a2b6336677b9] Running
	I0812 12:06:11.199834   65466 system_pods.go:61] "etcd-auto-824402" [fa75f3b1-3b4e-41a5-8c25-552550fae487] Running
	I0812 12:06:11.199838   65466 system_pods.go:61] "kube-apiserver-auto-824402" [af5f4821-3692-4b90-9a7d-c1d54eac2cbb] Running
	I0812 12:06:11.199841   65466 system_pods.go:61] "kube-controller-manager-auto-824402" [1cd0b3b0-462c-445e-ba64-ee7e5be1752e] Running
	I0812 12:06:11.199844   65466 system_pods.go:61] "kube-proxy-mkd8w" [e00cd914-2da1-4ae2-aa93-b67233056c18] Running
	I0812 12:06:11.199848   65466 system_pods.go:61] "kube-scheduler-auto-824402" [b33fa1bb-e1e1-41bd-b6fc-14f5416ac114] Running
	I0812 12:06:11.199851   65466 system_pods.go:61] "storage-provisioner" [f6e7c2c4-fa8a-457d-a2f9-f578f8af14e5] Running
	I0812 12:06:11.199856   65466 system_pods.go:74] duration metric: took 176.645591ms to wait for pod list to return data ...
	I0812 12:06:11.199865   65466 default_sa.go:34] waiting for default service account to be created ...
	I0812 12:06:11.396790   65466 default_sa.go:45] found service account: "default"
	I0812 12:06:11.396818   65466 default_sa.go:55] duration metric: took 196.945759ms for default service account to be created ...
	I0812 12:06:11.396830   65466 system_pods.go:116] waiting for k8s-apps to be running ...
	I0812 12:06:11.599564   65466 system_pods.go:86] 7 kube-system pods found
	I0812 12:06:11.599595   65466 system_pods.go:89] "coredns-7db6d8ff4d-b4xgv" [973c2caf-341a-4b8e-b8c9-a2b6336677b9] Running
	I0812 12:06:11.599600   65466 system_pods.go:89] "etcd-auto-824402" [fa75f3b1-3b4e-41a5-8c25-552550fae487] Running
	I0812 12:06:11.599605   65466 system_pods.go:89] "kube-apiserver-auto-824402" [af5f4821-3692-4b90-9a7d-c1d54eac2cbb] Running
	I0812 12:06:11.599609   65466 system_pods.go:89] "kube-controller-manager-auto-824402" [1cd0b3b0-462c-445e-ba64-ee7e5be1752e] Running
	I0812 12:06:11.599612   65466 system_pods.go:89] "kube-proxy-mkd8w" [e00cd914-2da1-4ae2-aa93-b67233056c18] Running
	I0812 12:06:11.599616   65466 system_pods.go:89] "kube-scheduler-auto-824402" [b33fa1bb-e1e1-41bd-b6fc-14f5416ac114] Running
	I0812 12:06:11.599620   65466 system_pods.go:89] "storage-provisioner" [f6e7c2c4-fa8a-457d-a2f9-f578f8af14e5] Running
	I0812 12:06:11.599626   65466 system_pods.go:126] duration metric: took 202.790166ms to wait for k8s-apps to be running ...
	I0812 12:06:11.599633   65466 system_svc.go:44] waiting for kubelet service to be running ....
	I0812 12:06:11.599674   65466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:06:11.615355   65466 system_svc.go:56] duration metric: took 15.71292ms WaitForService to wait for kubelet
	I0812 12:06:11.615391   65466 kubeadm.go:582] duration metric: took 43.178053925s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 12:06:11.615415   65466 node_conditions.go:102] verifying NodePressure condition ...
	I0812 12:06:11.796714   65466 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 12:06:11.796743   65466 node_conditions.go:123] node cpu capacity is 2
	I0812 12:06:11.796755   65466 node_conditions.go:105] duration metric: took 181.335756ms to run NodePressure ...
	I0812 12:06:11.796766   65466 start.go:241] waiting for startup goroutines ...
	I0812 12:06:11.796772   65466 start.go:246] waiting for cluster config update ...
	I0812 12:06:11.796787   65466 start.go:255] writing updated cluster config ...
	I0812 12:06:11.797117   65466 ssh_runner.go:195] Run: rm -f paused
	I0812 12:06:11.852710   65466 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0812 12:06:11.854912   65466 out.go:177] * Done! kubectl is now configured to use "auto-824402" cluster and "default" namespace by default
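"Done!" means the workstation kubeconfig now carries an auto-824402 context set as current; a couple of smoke checks confirm it:

    kubectl config current-context        # should print auto-824402
    kubectl --context auto-824402 get nodes -o wide
    kubectl --context auto-824402 -n kube-system get pods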
	I0812 12:06:10.938722   65845 pod_ready.go:92] pod "coredns-7db6d8ff4d-nv5rl" in "kube-system" namespace has status "Ready":"True"
	I0812 12:06:10.938748   65845 pod_ready.go:81] duration metric: took 1.00698443s for pod "coredns-7db6d8ff4d-nv5rl" in "kube-system" namespace to be "Ready" ...
	I0812 12:06:10.938757   65845 pod_ready.go:78] waiting up to 15m0s for pod "etcd-kindnet-824402" in "kube-system" namespace to be "Ready" ...
	I0812 12:06:10.943038   65845 pod_ready.go:92] pod "etcd-kindnet-824402" in "kube-system" namespace has status "Ready":"True"
	I0812 12:06:10.943060   65845 pod_ready.go:81] duration metric: took 4.296912ms for pod "etcd-kindnet-824402" in "kube-system" namespace to be "Ready" ...
	I0812 12:06:10.943071   65845 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-kindnet-824402" in "kube-system" namespace to be "Ready" ...
	I0812 12:06:10.947728   65845 pod_ready.go:92] pod "kube-apiserver-kindnet-824402" in "kube-system" namespace has status "Ready":"True"
	I0812 12:06:10.947747   65845 pod_ready.go:81] duration metric: took 4.670781ms for pod "kube-apiserver-kindnet-824402" in "kube-system" namespace to be "Ready" ...
	I0812 12:06:10.947756   65845 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-kindnet-824402" in "kube-system" namespace to be "Ready" ...
	I0812 12:06:10.951999   65845 pod_ready.go:92] pod "kube-controller-manager-kindnet-824402" in "kube-system" namespace has status "Ready":"True"
	I0812 12:06:10.952021   65845 pod_ready.go:81] duration metric: took 4.25855ms for pod "kube-controller-manager-kindnet-824402" in "kube-system" namespace to be "Ready" ...
	I0812 12:06:10.952033   65845 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-vszvd" in "kube-system" namespace to be "Ready" ...
	I0812 12:06:11.123920   65845 pod_ready.go:92] pod "kube-proxy-vszvd" in "kube-system" namespace has status "Ready":"True"
	I0812 12:06:11.123946   65845 pod_ready.go:81] duration metric: took 171.905583ms for pod "kube-proxy-vszvd" in "kube-system" namespace to be "Ready" ...
	I0812 12:06:11.123958   65845 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-kindnet-824402" in "kube-system" namespace to be "Ready" ...
	I0812 12:06:11.523821   65845 pod_ready.go:92] pod "kube-scheduler-kindnet-824402" in "kube-system" namespace has status "Ready":"True"
	I0812 12:06:11.523848   65845 pod_ready.go:81] duration metric: took 399.881788ms for pod "kube-scheduler-kindnet-824402" in "kube-system" namespace to be "Ready" ...
	I0812 12:06:11.523863   65845 pod_ready.go:38] duration metric: took 1.599675483s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 12:06:11.523879   65845 api_server.go:52] waiting for apiserver process to appear ...
	I0812 12:06:11.523942   65845 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 12:06:11.540092   65845 api_server.go:72] duration metric: took 19.462165999s to wait for apiserver process to appear ...
	I0812 12:06:11.540117   65845 api_server.go:88] waiting for apiserver healthz status ...
	I0812 12:06:11.540134   65845 api_server.go:253] Checking apiserver healthz at https://192.168.72.181:8443/healthz ...
	I0812 12:06:11.544550   65845 api_server.go:279] https://192.168.72.181:8443/healthz returned 200:
	ok
	I0812 12:06:11.545713   65845 api_server.go:141] control plane version: v1.30.3
	I0812 12:06:11.545740   65845 api_server.go:131] duration metric: took 5.615791ms to wait for apiserver health ...
	I0812 12:06:11.545751   65845 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 12:06:11.726411   65845 system_pods.go:59] 8 kube-system pods found
	I0812 12:06:11.726441   65845 system_pods.go:61] "coredns-7db6d8ff4d-nv5rl" [869c0138-95ac-4bc7-94cf-b3b9ee3e3bbb] Running
	I0812 12:06:11.726446   65845 system_pods.go:61] "etcd-kindnet-824402" [bdadc62f-dd4a-4aa2-834e-7e303bc6fd42] Running
	I0812 12:06:11.726450   65845 system_pods.go:61] "kindnet-7lpc5" [f03c2b81-4edb-478a-be01-7e8eddc22ec4] Running
	I0812 12:06:11.726453   65845 system_pods.go:61] "kube-apiserver-kindnet-824402" [29d585b9-cb86-4863-bef7-407c794b84e8] Running
	I0812 12:06:11.726457   65845 system_pods.go:61] "kube-controller-manager-kindnet-824402" [f232f843-0485-4fbe-a847-0f8173b43fc0] Running
	I0812 12:06:11.726460   65845 system_pods.go:61] "kube-proxy-vszvd" [296588f6-3d29-45ce-b049-b8e3316b9e13] Running
	I0812 12:06:11.726463   65845 system_pods.go:61] "kube-scheduler-kindnet-824402" [951b6c8e-e291-4b01-9a27-e2b4698a27bb] Running
	I0812 12:06:11.726466   65845 system_pods.go:61] "storage-provisioner" [b9215c7b-f320-4b3a-bf36-4f2c4e09c30f] Running
	I0812 12:06:11.726471   65845 system_pods.go:74] duration metric: took 180.715075ms to wait for pod list to return data ...
	I0812 12:06:11.726478   65845 default_sa.go:34] waiting for default service account to be created ...
	I0812 12:06:11.923718   65845 default_sa.go:45] found service account: "default"
	I0812 12:06:11.923747   65845 default_sa.go:55] duration metric: took 197.26331ms for default service account to be created ...
	I0812 12:06:11.923756   65845 system_pods.go:116] waiting for k8s-apps to be running ...
	I0812 12:06:12.127465   65845 system_pods.go:86] 8 kube-system pods found
	I0812 12:06:12.127505   65845 system_pods.go:89] "coredns-7db6d8ff4d-nv5rl" [869c0138-95ac-4bc7-94cf-b3b9ee3e3bbb] Running
	I0812 12:06:12.127513   65845 system_pods.go:89] "etcd-kindnet-824402" [bdadc62f-dd4a-4aa2-834e-7e303bc6fd42] Running
	I0812 12:06:12.127520   65845 system_pods.go:89] "kindnet-7lpc5" [f03c2b81-4edb-478a-be01-7e8eddc22ec4] Running
	I0812 12:06:12.127526   65845 system_pods.go:89] "kube-apiserver-kindnet-824402" [29d585b9-cb86-4863-bef7-407c794b84e8] Running
	I0812 12:06:12.127532   65845 system_pods.go:89] "kube-controller-manager-kindnet-824402" [f232f843-0485-4fbe-a847-0f8173b43fc0] Running
	I0812 12:06:12.127546   65845 system_pods.go:89] "kube-proxy-vszvd" [296588f6-3d29-45ce-b049-b8e3316b9e13] Running
	I0812 12:06:12.127552   65845 system_pods.go:89] "kube-scheduler-kindnet-824402" [951b6c8e-e291-4b01-9a27-e2b4698a27bb] Running
	I0812 12:06:12.127560   65845 system_pods.go:89] "storage-provisioner" [b9215c7b-f320-4b3a-bf36-4f2c4e09c30f] Running
	I0812 12:06:12.127570   65845 system_pods.go:126] duration metric: took 203.808362ms to wait for k8s-apps to be running ...
	I0812 12:06:12.127582   65845 system_svc.go:44] waiting for kubelet service to be running ....
	I0812 12:06:12.127636   65845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:06:12.145441   65845 system_svc.go:56] duration metric: took 17.850344ms WaitForService to wait for kubelet
	I0812 12:06:12.145473   65845 kubeadm.go:582] duration metric: took 20.067551957s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 12:06:12.145492   65845 node_conditions.go:102] verifying NodePressure condition ...
	I0812 12:06:12.325690   65845 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 12:06:12.325726   65845 node_conditions.go:123] node cpu capacity is 2
	I0812 12:06:12.325739   65845 node_conditions.go:105] duration metric: took 180.242018ms to run NodePressure ...
	I0812 12:06:12.325756   65845 start.go:241] waiting for startup goroutines ...
	I0812 12:06:12.325767   65845 start.go:246] waiting for cluster config update ...
	I0812 12:06:12.325787   65845 start.go:255] writing updated cluster config ...
	I0812 12:06:12.326102   65845 ssh_runner.go:195] Run: rm -f paused
	I0812 12:06:12.382404   65845 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0812 12:06:12.384638   65845 out.go:177] * Done! kubectl is now configured to use "kindnet-824402" cluster and "default" namespace by default
	I0812 12:06:09.746978   66240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:06:10.247416   66240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:06:10.747378   66240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:06:11.247674   66240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:06:11.747478   66240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:06:12.247696   66240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:06:12.747739   66240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:06:13.247664   66240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:06:13.747582   66240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:06:14.246882   66240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:06:14.747223   66240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:06:14.866667   66240 kubeadm.go:1113] duration metric: took 11.728234521s to wait for elevateKubeSystemPrivileges
	I0812 12:06:14.866707   66240 kubeadm.go:394] duration metric: took 24.598253974s to StartCluster
	I0812 12:06:14.866727   66240 settings.go:142] acquiring lock: {Name:mk4060151bda1f131d6abfbbd19b6a8ab3b5e774 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:06:14.866810   66240 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 12:06:14.868957   66240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/kubeconfig: {Name:mke68f1d372d8e5baa69199b426efaec54860499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:06:14.869235   66240 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0812 12:06:14.869247   66240 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.88 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 12:06:14.869340   66240 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0812 12:06:14.869419   66240 addons.go:69] Setting storage-provisioner=true in profile "calico-824402"
	I0812 12:06:14.869436   66240 config.go:182] Loaded profile config "calico-824402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:06:14.869456   66240 addons.go:234] Setting addon storage-provisioner=true in "calico-824402"
	I0812 12:06:14.869454   66240 addons.go:69] Setting default-storageclass=true in profile "calico-824402"
	I0812 12:06:14.869494   66240 host.go:66] Checking if "calico-824402" exists ...
	I0812 12:06:14.869494   66240 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-824402"
	I0812 12:06:14.869946   66240 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:06:14.869971   66240 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:06:14.869995   66240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:06:14.870012   66240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:06:14.871173   66240 out.go:177] * Verifying Kubernetes components...
	I0812 12:06:14.872722   66240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:06:14.889438   66240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I0812 12:06:14.889458   66240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35951
	I0812 12:06:14.890089   66240 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:06:14.890153   66240 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:06:14.890830   66240 main.go:141] libmachine: Using API Version  1
	I0812 12:06:14.890838   66240 main.go:141] libmachine: Using API Version  1
	I0812 12:06:14.890851   66240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:06:14.890857   66240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:06:14.891227   66240 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:06:14.891286   66240 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:06:14.891460   66240 main.go:141] libmachine: (calico-824402) Calling .GetState
	I0812 12:06:14.891900   66240 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:06:14.891990   66240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:06:14.895663   66240 addons.go:234] Setting addon default-storageclass=true in "calico-824402"
	I0812 12:06:14.895709   66240 host.go:66] Checking if "calico-824402" exists ...
	I0812 12:06:14.896066   66240 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:06:14.896117   66240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:06:14.912369   66240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36273
	I0812 12:06:14.913004   66240 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:06:14.913631   66240 main.go:141] libmachine: Using API Version  1
	I0812 12:06:14.913658   66240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:06:14.914088   66240 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:06:14.914315   66240 main.go:141] libmachine: (calico-824402) Calling .GetState
	I0812 12:06:14.916354   66240 main.go:141] libmachine: (calico-824402) Calling .DriverName
	I0812 12:06:14.918564   66240 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 12:06:14.920115   66240 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 12:06:14.920141   66240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 12:06:14.920164   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHHostname
	I0812 12:06:14.920534   66240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43281
	I0812 12:06:14.921195   66240 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:06:14.921857   66240 main.go:141] libmachine: Using API Version  1
	I0812 12:06:14.921879   66240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:06:14.922411   66240 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:06:14.923043   66240 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:06:14.923085   66240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:06:14.924015   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:06:14.924636   66240 main.go:141] libmachine: (calico-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:11:b8", ip: ""} in network mk-calico-824402: {Iface:virbr1 ExpiryTime:2024-08-12 13:05:35 +0000 UTC Type:0 Mac:52:54:00:59:11:b8 Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:calico-824402 Clientid:01:52:54:00:59:11:b8}
	I0812 12:06:14.924658   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined IP address 192.168.61.88 and MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:06:14.925061   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHPort
	I0812 12:06:14.925821   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHKeyPath
	I0812 12:06:14.926094   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHUsername
	I0812 12:06:14.926278   66240 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/calico-824402/id_rsa Username:docker}
	I0812 12:06:14.944963   66240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39211
	I0812 12:06:14.945468   66240 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:06:14.945990   66240 main.go:141] libmachine: Using API Version  1
	I0812 12:06:14.946018   66240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:06:14.946338   66240 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:06:14.946656   66240 main.go:141] libmachine: (calico-824402) Calling .GetState
	I0812 12:06:14.948493   66240 main.go:141] libmachine: (calico-824402) Calling .DriverName
	I0812 12:06:14.948733   66240 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 12:06:14.948748   66240 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 12:06:14.948763   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHHostname
	I0812 12:06:14.952054   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:06:14.952939   66240 main.go:141] libmachine: (calico-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:11:b8", ip: ""} in network mk-calico-824402: {Iface:virbr1 ExpiryTime:2024-08-12 13:05:35 +0000 UTC Type:0 Mac:52:54:00:59:11:b8 Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:calico-824402 Clientid:01:52:54:00:59:11:b8}
	I0812 12:06:14.952963   66240 main.go:141] libmachine: (calico-824402) DBG | domain calico-824402 has defined IP address 192.168.61.88 and MAC address 52:54:00:59:11:b8 in network mk-calico-824402
	I0812 12:06:14.953158   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHPort
	I0812 12:06:14.953365   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHKeyPath
	I0812 12:06:14.953520   66240 main.go:141] libmachine: (calico-824402) Calling .GetSSHUsername
	I0812 12:06:14.953696   66240 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/calico-824402/id_rsa Username:docker}
	I0812 12:06:15.105606   66240 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0812 12:06:15.132561   66240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 12:06:15.390610   66240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 12:06:15.422997   66240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 12:06:15.925060   66240 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0812 12:06:15.926630   66240 node_ready.go:35] waiting up to 15m0s for node "calico-824402" to be "Ready" ...
	I0812 12:06:15.958713   66240 main.go:141] libmachine: Making call to close driver server
	I0812 12:06:15.958739   66240 main.go:141] libmachine: (calico-824402) Calling .Close
	I0812 12:06:15.959040   66240 main.go:141] libmachine: Successfully made call to close driver server
	I0812 12:06:15.959067   66240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 12:06:15.959076   66240 main.go:141] libmachine: Making call to close driver server
	I0812 12:06:15.959081   66240 main.go:141] libmachine: (calico-824402) Calling .Close
	I0812 12:06:15.959321   66240 main.go:141] libmachine: Successfully made call to close driver server
	I0812 12:06:15.959342   66240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 12:06:15.959324   66240 main.go:141] libmachine: (calico-824402) DBG | Closing plugin on server side
	I0812 12:06:15.969249   66240 main.go:141] libmachine: Making call to close driver server
	I0812 12:06:15.969275   66240 main.go:141] libmachine: (calico-824402) Calling .Close
	I0812 12:06:15.969773   66240 main.go:141] libmachine: (calico-824402) DBG | Closing plugin on server side
	I0812 12:06:15.969794   66240 main.go:141] libmachine: Successfully made call to close driver server
	I0812 12:06:15.969809   66240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 12:06:16.435075   66240 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-824402" context rescaled to 1 replicas
	I0812 12:06:16.479504   66240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.056472384s)
	I0812 12:06:16.479555   66240 main.go:141] libmachine: Making call to close driver server
	I0812 12:06:16.479565   66240 main.go:141] libmachine: (calico-824402) Calling .Close
	I0812 12:06:16.479961   66240 main.go:141] libmachine: (calico-824402) DBG | Closing plugin on server side
	I0812 12:06:16.479960   66240 main.go:141] libmachine: Successfully made call to close driver server
	I0812 12:06:16.479993   66240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 12:06:16.480003   66240 main.go:141] libmachine: Making call to close driver server
	I0812 12:06:16.480014   66240 main.go:141] libmachine: (calico-824402) Calling .Close
	I0812 12:06:16.480315   66240 main.go:141] libmachine: Successfully made call to close driver server
	I0812 12:06:16.480377   66240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 12:06:16.480426   66240 main.go:141] libmachine: (calico-824402) DBG | Closing plugin on server side
	I0812 12:06:16.482051   66240 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0812 12:06:16.483490   66240 addons.go:510] duration metric: took 1.614148553s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0812 12:06:17.930697   66240 node_ready.go:53] node "calico-824402" has status "Ready":"False"
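[Editorial aside, not part of the captured log] The calico-824402 output ends while minikube is still polling for the node's Ready condition (node_ready.go:35 and node_ready.go:53 above, with a 15m0s budget). A hypothetical standalone Go sketch of that wait loop, again shelling out to kubectl instead of using minikube's internal client, follows. The context and node name calico-824402 and the 15-minute budget come from the log; the roughly two-second polling interval and everything else are assumptions for illustration only.

    // sketch_wait_ready.go: hypothetical wait loop, not part of minikube or this report's harness.
    // Polls the Ready condition of node "calico-824402" until it is True or the budget runs out.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(15 * time.Minute) // same budget as "waiting up to 15m0s" in the log
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("kubectl", "--context", "calico-824402",
    			"get", "node", "calico-824402",
    			"-o", "jsonpath={.status.conditions[?(@.type==\"Ready\")].status}").Output()
    		if err == nil && strings.TrimSpace(string(out)) == "True" {
    			fmt.Println("node calico-824402 is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second) // roughly the cadence of the node_ready.go polling above
    	}
    	fmt.Println("timed out waiting for node calico-824402 to become Ready")
    }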
	
	
	==> CRI-O <==
	Aug 12 12:06:22 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:06:22.476437027Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7676e6cb-1557-446f-98d6-1f78fcf30f63 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:06:22 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:06:22.476737593Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8570fb2a8fc3fdbfe7cea08441468023cc8cee013e33a66bb26c807bfa1563dd,PodSandboxId:e6452c0888bf73fdeb682033a0ec7a4c5da745fb4d903d3e2416119d5a39d742,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723463590190840752,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4930c51e-a227-4742-b74a-669e9bea4e75,},Annotations:map[string]string{io.kubernetes.container.hash: acf9d8f0,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4,PodSandboxId:d99458c08ab379c3e3f66d398bbb2c370cd87ade4b9181c4d7b6d1c5e0f25b15,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463587302117092,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-86flr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 703201f6-ba92-45f7-b273-ee508cf51e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 96632d4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":
\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c,PodSandboxId:27a19bbbd58972fd4696c66e26d8f982707a3730dc4e7fcb651e17e4c68af1b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723463585227850623,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 93affc3b-a4e7-4c19-824c-3eec33616acc,},Annotations:map[string]string{io.kubernetes.container.hash: 60a22b4b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26,PodSandboxId:c2130f142c3ea6bfa2b183e340f8a8a5a2d67275ec8aeb812a88fc5fb23cea01,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723463583231795510,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6fzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0f6bcc8-26
3a-4b23-a60b-c67475a868bf,},Annotations:map[string]string{io.kubernetes.container.hash: 9f59257e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f,PodSandboxId:337327ca5a4bbfb704b1dd1eb4debcbc3a184aaf3faea59a8e0d98e868e3bf2b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723463570380065596,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 87e28fa37bca7211058973f16ff6cce0,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126,PodSandboxId:ae96fed1fe4ba01bdf70ed821b3613e7827855ba051ab64629af25dc31a425bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723463563311742133,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e4b581148cc79
b5d3e65b07cdee767f,},Annotations:map[string]string{io.kubernetes.container.hash: 4bf4fd88,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98,PodSandboxId:32667d56a4852941322a4b385a46d64306e70dc84439d329186b824bae374608,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723463552309598415,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 941ee3e5ebd2b0c2
d10426764fced5cd,},Annotations:map[string]string{io.kubernetes.container.hash: d5de8ef9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804,PodSandboxId:ea5be7e9df4058dc6ba9d858451a0f9020e35db6b685af4cadc11029c67de56f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723463531011469465,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f17375bf38b45aef0
44822c815b92ea,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f,PodSandboxId:337327ca5a4bbfb704b1dd1eb4debcbc3a184aaf3faea59a8e0d98e868e3bf2b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723463531016562967,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 87e28fa37bca7211058973f16ff6cce0,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1,PodSandboxId:32667d56a4852941322a4b385a46d64306e70dc84439d329186b824bae374608,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723463530981775319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 941ee3e5ebd2b0c2d10426764fced5cd,},Annotations:map[string]string{io.kubernetes.container.hash: d5de8ef9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7676e6cb-1557-446f-98d6-1f78fcf30f63 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:06:22 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:06:22.521621216Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1fded1a5-cb94-4205-af8f-91a64b8ba978 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:06:22 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:06:22.521762853Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1fded1a5-cb94-4205-af8f-91a64b8ba978 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:06:22 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:06:22.528834069Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=04f8d5da-920a-4dd9-83d4-c9982c410bab name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:06:22 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:06:22.529463968Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464382529427192,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=04f8d5da-920a-4dd9-83d4-c9982c410bab name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:06:22 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:06:22.530396930Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=95eeef6c-fcde-46ce-80a5-1fa7110bbdff name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:06:22 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:06:22.530478024Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=95eeef6c-fcde-46ce-80a5-1fa7110bbdff name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:06:22 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:06:22.530767498Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8570fb2a8fc3fdbfe7cea08441468023cc8cee013e33a66bb26c807bfa1563dd,PodSandboxId:e6452c0888bf73fdeb682033a0ec7a4c5da745fb4d903d3e2416119d5a39d742,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723463590190840752,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4930c51e-a227-4742-b74a-669e9bea4e75,},Annotations:map[string]string{io.kubernetes.container.hash: acf9d8f0,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4,PodSandboxId:d99458c08ab379c3e3f66d398bbb2c370cd87ade4b9181c4d7b6d1c5e0f25b15,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463587302117092,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-86flr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 703201f6-ba92-45f7-b273-ee508cf51e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 96632d4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":
\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c,PodSandboxId:27a19bbbd58972fd4696c66e26d8f982707a3730dc4e7fcb651e17e4c68af1b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723463585227850623,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 93affc3b-a4e7-4c19-824c-3eec33616acc,},Annotations:map[string]string{io.kubernetes.container.hash: 60a22b4b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26,PodSandboxId:c2130f142c3ea6bfa2b183e340f8a8a5a2d67275ec8aeb812a88fc5fb23cea01,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723463583231795510,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6fzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0f6bcc8-26
3a-4b23-a60b-c67475a868bf,},Annotations:map[string]string{io.kubernetes.container.hash: 9f59257e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f,PodSandboxId:337327ca5a4bbfb704b1dd1eb4debcbc3a184aaf3faea59a8e0d98e868e3bf2b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723463570380065596,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 87e28fa37bca7211058973f16ff6cce0,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126,PodSandboxId:ae96fed1fe4ba01bdf70ed821b3613e7827855ba051ab64629af25dc31a425bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723463563311742133,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e4b581148cc79
b5d3e65b07cdee767f,},Annotations:map[string]string{io.kubernetes.container.hash: 4bf4fd88,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98,PodSandboxId:32667d56a4852941322a4b385a46d64306e70dc84439d329186b824bae374608,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723463552309598415,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 941ee3e5ebd2b0c2
d10426764fced5cd,},Annotations:map[string]string{io.kubernetes.container.hash: d5de8ef9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804,PodSandboxId:ea5be7e9df4058dc6ba9d858451a0f9020e35db6b685af4cadc11029c67de56f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723463531011469465,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f17375bf38b45aef0
44822c815b92ea,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f,PodSandboxId:337327ca5a4bbfb704b1dd1eb4debcbc3a184aaf3faea59a8e0d98e868e3bf2b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723463531016562967,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 87e28fa37bca7211058973f16ff6cce0,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1,PodSandboxId:32667d56a4852941322a4b385a46d64306e70dc84439d329186b824bae374608,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723463530981775319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 941ee3e5ebd2b0c2d10426764fced5cd,},Annotations:map[string]string{io.kubernetes.container.hash: d5de8ef9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=95eeef6c-fcde-46ce-80a5-1fa7110bbdff name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:06:22 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:06:22.580071514Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=746dd440-4e5f-4786-bb75-0a478ea46196 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:06:22 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:06:22.580144242Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=746dd440-4e5f-4786-bb75-0a478ea46196 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:06:22 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:06:22.581763543Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bf5b4a0c-0474-4192-821e-b71adc1a535c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:06:22 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:06:22.582537874Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464382582508595,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bf5b4a0c-0474-4192-821e-b71adc1a535c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:06:22 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:06:22.583346777Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=32a08b78-9940-4b3e-944a-aa5852ea721b name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:06:22 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:06:22.583407059Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=32a08b78-9940-4b3e-944a-aa5852ea721b name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:06:22 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:06:22.583627037Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8570fb2a8fc3fdbfe7cea08441468023cc8cee013e33a66bb26c807bfa1563dd,PodSandboxId:e6452c0888bf73fdeb682033a0ec7a4c5da745fb4d903d3e2416119d5a39d742,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723463590190840752,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4930c51e-a227-4742-b74a-669e9bea4e75,},Annotations:map[string]string{io.kubernetes.container.hash: acf9d8f0,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4,PodSandboxId:d99458c08ab379c3e3f66d398bbb2c370cd87ade4b9181c4d7b6d1c5e0f25b15,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463587302117092,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-86flr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 703201f6-ba92-45f7-b273-ee508cf51e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 96632d4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":
\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c,PodSandboxId:27a19bbbd58972fd4696c66e26d8f982707a3730dc4e7fcb651e17e4c68af1b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723463585227850623,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 93affc3b-a4e7-4c19-824c-3eec33616acc,},Annotations:map[string]string{io.kubernetes.container.hash: 60a22b4b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26,PodSandboxId:c2130f142c3ea6bfa2b183e340f8a8a5a2d67275ec8aeb812a88fc5fb23cea01,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723463583231795510,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6fzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0f6bcc8-26
3a-4b23-a60b-c67475a868bf,},Annotations:map[string]string{io.kubernetes.container.hash: 9f59257e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f,PodSandboxId:337327ca5a4bbfb704b1dd1eb4debcbc3a184aaf3faea59a8e0d98e868e3bf2b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723463570380065596,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 87e28fa37bca7211058973f16ff6cce0,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126,PodSandboxId:ae96fed1fe4ba01bdf70ed821b3613e7827855ba051ab64629af25dc31a425bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723463563311742133,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e4b581148cc79
b5d3e65b07cdee767f,},Annotations:map[string]string{io.kubernetes.container.hash: 4bf4fd88,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98,PodSandboxId:32667d56a4852941322a4b385a46d64306e70dc84439d329186b824bae374608,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723463552309598415,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 941ee3e5ebd2b0c2
d10426764fced5cd,},Annotations:map[string]string{io.kubernetes.container.hash: d5de8ef9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804,PodSandboxId:ea5be7e9df4058dc6ba9d858451a0f9020e35db6b685af4cadc11029c67de56f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723463531011469465,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f17375bf38b45aef0
44822c815b92ea,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f,PodSandboxId:337327ca5a4bbfb704b1dd1eb4debcbc3a184aaf3faea59a8e0d98e868e3bf2b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723463531016562967,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 87e28fa37bca7211058973f16ff6cce0,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1,PodSandboxId:32667d56a4852941322a4b385a46d64306e70dc84439d329186b824bae374608,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723463530981775319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 941ee3e5ebd2b0c2d10426764fced5cd,},Annotations:map[string]string{io.kubernetes.container.hash: d5de8ef9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=32a08b78-9940-4b3e-944a-aa5852ea721b name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:06:22 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:06:22.619227901Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bc15b852-4b35-4a35-979d-2bc5d2f7a9fd name=/runtime.v1.RuntimeService/Version
	Aug 12 12:06:22 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:06:22.619357772Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bc15b852-4b35-4a35-979d-2bc5d2f7a9fd name=/runtime.v1.RuntimeService/Version
	Aug 12 12:06:22 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:06:22.621047271Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=590f32bf-2cd6-462d-bcb6-6bbc715ddf67 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:06:22 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:06:22.621694664Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464382621664194,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=590f32bf-2cd6-462d-bcb6-6bbc715ddf67 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:06:22 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:06:22.622450070Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e5912ea-48ad-4468-8c4a-f2faddc49fa2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:06:22 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:06:22.622516363Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e5912ea-48ad-4468-8c4a-f2faddc49fa2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:06:22 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:06:22.622920124Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8570fb2a8fc3fdbfe7cea08441468023cc8cee013e33a66bb26c807bfa1563dd,PodSandboxId:e6452c0888bf73fdeb682033a0ec7a4c5da745fb4d903d3e2416119d5a39d742,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723463590190840752,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4930c51e-a227-4742-b74a-669e9bea4e75,},Annotations:map[string]string{io.kubernetes.container.hash: acf9d8f0,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4,PodSandboxId:d99458c08ab379c3e3f66d398bbb2c370cd87ade4b9181c4d7b6d1c5e0f25b15,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463587302117092,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-86flr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 703201f6-ba92-45f7-b273-ee508cf51e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 96632d4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":
\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c,PodSandboxId:27a19bbbd58972fd4696c66e26d8f982707a3730dc4e7fcb651e17e4c68af1b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723463585227850623,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 93affc3b-a4e7-4c19-824c-3eec33616acc,},Annotations:map[string]string{io.kubernetes.container.hash: 60a22b4b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26,PodSandboxId:c2130f142c3ea6bfa2b183e340f8a8a5a2d67275ec8aeb812a88fc5fb23cea01,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723463583231795510,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6fzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0f6bcc8-26
3a-4b23-a60b-c67475a868bf,},Annotations:map[string]string{io.kubernetes.container.hash: 9f59257e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f,PodSandboxId:337327ca5a4bbfb704b1dd1eb4debcbc3a184aaf3faea59a8e0d98e868e3bf2b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723463570380065596,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 87e28fa37bca7211058973f16ff6cce0,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126,PodSandboxId:ae96fed1fe4ba01bdf70ed821b3613e7827855ba051ab64629af25dc31a425bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723463563311742133,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e4b581148cc79
b5d3e65b07cdee767f,},Annotations:map[string]string{io.kubernetes.container.hash: 4bf4fd88,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98,PodSandboxId:32667d56a4852941322a4b385a46d64306e70dc84439d329186b824bae374608,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723463552309598415,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 941ee3e5ebd2b0c2
d10426764fced5cd,},Annotations:map[string]string{io.kubernetes.container.hash: d5de8ef9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804,PodSandboxId:ea5be7e9df4058dc6ba9d858451a0f9020e35db6b685af4cadc11029c67de56f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723463531011469465,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f17375bf38b45aef0
44822c815b92ea,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f,PodSandboxId:337327ca5a4bbfb704b1dd1eb4debcbc3a184aaf3faea59a8e0d98e868e3bf2b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723463531016562967,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 87e28fa37bca7211058973f16ff6cce0,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1,PodSandboxId:32667d56a4852941322a4b385a46d64306e70dc84439d329186b824bae374608,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723463530981775319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 941ee3e5ebd2b0c2d10426764fced5cd,},Annotations:map[string]string{io.kubernetes.container.hash: d5de8ef9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2e5912ea-48ad-4468-8c4a-f2faddc49fa2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:06:22 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:06:22.851843062Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=d84a0e16-5348-4496-b25e-e7477d79f68c name=/runtime.v1.RuntimeService/Version
	Aug 12 12:06:22 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:06:22.851946721Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d84a0e16-5348-4496-b25e-e7477d79f68c name=/runtime.v1.RuntimeService/Version
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8570fb2a8fc3f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   e6452c0888bf7       busybox
	72cbd6f9c7cd4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   d99458c08ab37       coredns-7db6d8ff4d-86flr
	3cd0b00766504       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       1                   27a19bbbd5897       storage-provisioner
	b283882c75248       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago      Running             kube-proxy                1                   c2130f142c3ea       kube-proxy-h6fzz
	b4740bb15a741       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      13 minutes ago      Running             kube-controller-manager   2                   337327ca5a4bb       kube-controller-manager-default-k8s-diff-port-581883
	a8c6a879fccb9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   ae96fed1fe4ba       etcd-default-k8s-diff-port-581883
	87bb668a8df7c       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      13 minutes ago      Running             kube-apiserver            2                   32667d56a4852       kube-apiserver-default-k8s-diff-port-581883
	f182f5e4cb38c       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      14 minutes ago      Exited              kube-controller-manager   1                   337327ca5a4bb       kube-controller-manager-default-k8s-diff-port-581883
	3fac62c7d9a1c       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      14 minutes ago      Running             kube-scheduler            1                   ea5be7e9df405       kube-scheduler-default-k8s-diff-port-581883
	399d65bf1849f       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      14 minutes ago      Exited              kube-apiserver            1                   32667d56a4852       kube-apiserver-default-k8s-diff-port-581883
	
	
	==> coredns [72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41679 - 22938 "HINFO IN 3970124945216707387.6801956301757465445. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012453193s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-581883
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-581883
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
	                    minikube.k8s.io/name=default-k8s-diff-port-581883
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_12T11_43_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 11:43:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-581883
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 12:06:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 12:03:59 +0000   Mon, 12 Aug 2024 11:43:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 12:03:59 +0000   Mon, 12 Aug 2024 11:43:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 12:03:59 +0000   Mon, 12 Aug 2024 11:43:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 12:03:59 +0000   Mon, 12 Aug 2024 11:53:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.114
	  Hostname:    default-k8s-diff-port-581883
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4246217d28ad450d8bacd3ae2138cfc0
	  System UUID:                4246217d-28ad-450d-8bac-d3ae2138cfc0
	  Boot ID:                    4bc71395-9c86-4364-b112-0ee5bb52e581
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7db6d8ff4d-86flr                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-default-k8s-diff-port-581883                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-default-k8s-diff-port-581883             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-581883    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-h6fzz                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-default-k8s-diff-port-581883             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-569cc877fc-wcpgl                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     22m                kubelet          Node default-k8s-diff-port-581883 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node default-k8s-diff-port-581883 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node default-k8s-diff-port-581883 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeReady                22m                kubelet          Node default-k8s-diff-port-581883 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node default-k8s-diff-port-581883 event: Registered Node default-k8s-diff-port-581883 in Controller
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node default-k8s-diff-port-581883 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node default-k8s-diff-port-581883 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node default-k8s-diff-port-581883 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-581883 event: Registered Node default-k8s-diff-port-581883 in Controller
	
	
	==> dmesg <==
	[Aug12 11:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053744] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039717] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.779994] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.890829] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.613443] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug12 11:52] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.057041] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064201] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.197045] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.125506] systemd-fstab-generator[690]: Ignoring "noauto" option for root device
	[  +0.310748] systemd-fstab-generator[721]: Ignoring "noauto" option for root device
	[  +4.202306] systemd-fstab-generator[817]: Ignoring "noauto" option for root device
	[  +1.799417] systemd-fstab-generator[940]: Ignoring "noauto" option for root device
	[  +0.072858] kauditd_printk_skb: 158 callbacks suppressed
	[ +13.688538] kauditd_printk_skb: 59 callbacks suppressed
	[ +34.771635] systemd-fstab-generator[1513]: Ignoring "noauto" option for root device
	[  +0.115144] kauditd_printk_skb: 5 callbacks suppressed
	[Aug12 11:53] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.277082] kauditd_printk_skb: 59 callbacks suppressed
	
	
	==> etcd [a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126] <==
	{"level":"warn","ts":"2024-08-12T12:03:54.268786Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.066682ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/\" range_end:\"/registry/roles0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-12T12:03:54.268833Z","caller":"traceutil/trace.go:171","msg":"trace[1108100782] range","detail":"{range_begin:/registry/roles/; range_end:/registry/roles0; response_count:0; response_revision:1139; }","duration":"171.133262ms","start":"2024-08-12T12:03:54.097689Z","end":"2024-08-12T12:03:54.268822Z","steps":["trace[1108100782] 'agreement among raft nodes before linearized reading'  (duration: 171.068988ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T12:03:54.269465Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-12T12:03:53.941827Z","time spent":"326.93746ms","remote":"127.0.0.1:34744","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":693,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-hvlrtefwdw6tjmrue5fnvamuai\" mod_revision:1131 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-hvlrtefwdw6tjmrue5fnvamuai\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-hvlrtefwdw6tjmrue5fnvamuai\" > >"}
	{"level":"warn","ts":"2024-08-12T12:03:54.268727Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"309.005496ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1129"}
	{"level":"info","ts":"2024-08-12T12:03:54.270204Z","caller":"traceutil/trace.go:171","msg":"trace[1567708809] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1139; }","duration":"310.535603ms","start":"2024-08-12T12:03:53.959645Z","end":"2024-08-12T12:03:54.270181Z","steps":["trace[1567708809] 'agreement among raft nodes before linearized reading'  (duration: 309.000084ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T12:03:54.270414Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-12T12:03:53.959632Z","time spent":"310.767418ms","remote":"127.0.0.1:34656","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1152,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"info","ts":"2024-08-12T12:03:54.483161Z","caller":"traceutil/trace.go:171","msg":"trace[1643122589] transaction","detail":"{read_only:false; response_revision:1140; number_of_response:1; }","duration":"208.281509ms","start":"2024-08-12T12:03:54.274863Z","end":"2024-08-12T12:03:54.483145Z","steps":["trace[1643122589] 'process raft request'  (duration: 198.974476ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T12:05:03.474774Z","caller":"traceutil/trace.go:171","msg":"trace[961564871] linearizableReadLoop","detail":"{readStateIndex:1378; appliedIndex:1377; }","duration":"450.355273ms","start":"2024-08-12T12:05:03.024388Z","end":"2024-08-12T12:05:03.474743Z","steps":["trace[961564871] 'read index received'  (duration: 450.22411ms)","trace[961564871] 'applied index is now lower than readState.Index'  (duration: 130.511µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-12T12:05:03.474915Z","caller":"traceutil/trace.go:171","msg":"trace[2012152665] transaction","detail":"{read_only:false; response_revision:1196; number_of_response:1; }","duration":"585.509355ms","start":"2024-08-12T12:05:02.889398Z","end":"2024-08-12T12:05:03.474908Z","steps":["trace[2012152665] 'process raft request'  (duration: 585.244092ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T12:05:03.475054Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-12T12:05:02.889379Z","time spent":"585.554616ms","remote":"127.0.0.1:34656","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1195 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-08-12T12:05:03.475201Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"450.820903ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-08-12T12:05:03.475511Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"381.781385ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-12T12:05:03.475608Z","caller":"traceutil/trace.go:171","msg":"trace[1596328005] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1196; }","duration":"381.882646ms","start":"2024-08-12T12:05:03.093713Z","end":"2024-08-12T12:05:03.475596Z","steps":["trace[1596328005] 'agreement among raft nodes before linearized reading'  (duration: 381.760993ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T12:05:03.475664Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-12T12:05:03.093671Z","time spent":"381.984617ms","remote":"127.0.0.1:34482","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-08-12T12:05:03.47584Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.667947ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-12T12:05:03.475894Z","caller":"traceutil/trace.go:171","msg":"trace[1653121073] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1196; }","duration":"232.74916ms","start":"2024-08-12T12:05:03.243135Z","end":"2024-08-12T12:05:03.475884Z","steps":["trace[1653121073] 'agreement among raft nodes before linearized reading'  (duration: 232.677185ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T12:05:03.476047Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"340.533894ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers/\" range_end:\"/registry/controllers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-12T12:05:03.476093Z","caller":"traceutil/trace.go:171","msg":"trace[235945458] range","detail":"{range_begin:/registry/controllers/; range_end:/registry/controllers0; response_count:0; response_revision:1196; }","duration":"340.608648ms","start":"2024-08-12T12:05:03.135475Z","end":"2024-08-12T12:05:03.476084Z","steps":["trace[235945458] 'agreement among raft nodes before linearized reading'  (duration: 340.548499ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T12:05:03.476146Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-12T12:05:03.135458Z","time spent":"340.677628ms","remote":"127.0.0.1:34698","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":0,"response size":28,"request content":"key:\"/registry/controllers/\" range_end:\"/registry/controllers0\" count_only:true "}
	{"level":"info","ts":"2024-08-12T12:05:03.475254Z","caller":"traceutil/trace.go:171","msg":"trace[1730288780] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; response_count:0; response_revision:1196; }","duration":"450.907345ms","start":"2024-08-12T12:05:03.02434Z","end":"2024-08-12T12:05:03.475247Z","steps":["trace[1730288780] 'agreement among raft nodes before linearized reading'  (duration: 450.834158ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T12:05:03.476388Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-12T12:05:03.024278Z","time spent":"452.095022ms","remote":"127.0.0.1:34982","response type":"/etcdserverpb.KV/Range","request count":0,"request size":90,"response count":0,"response size":28,"request content":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true "}
	{"level":"warn","ts":"2024-08-12T12:05:25.604787Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.277517ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"}
	{"level":"info","ts":"2024-08-12T12:05:25.604923Z","caller":"traceutil/trace.go:171","msg":"trace[1421475193] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:1214; }","duration":"129.427993ms","start":"2024-08-12T12:05:25.475483Z","end":"2024-08-12T12:05:25.604911Z","steps":["trace[1421475193] 'range keys from in-memory index tree'  (duration: 129.165878ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T12:05:25.749207Z","caller":"traceutil/trace.go:171","msg":"trace[1515799042] transaction","detail":"{read_only:false; response_revision:1215; number_of_response:1; }","duration":"139.784632ms","start":"2024-08-12T12:05:25.609403Z","end":"2024-08-12T12:05:25.749187Z","steps":["trace[1515799042] 'process raft request'  (duration: 139.639053ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T12:05:50.851523Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"349.569303ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3883669985372455496 > lease_revoke:<id:35e591466ef8c1f7>","response":"size:28"}
	
	
	==> kernel <==
	 12:06:23 up 14 min,  0 users,  load average: 0.20, 0.24, 0.15
	Linux default-k8s-diff-port-581883 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1] <==
	I0812 11:52:11.315151       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0812 11:52:11.819648       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:52:11.822423       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0812 11:52:11.822520       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0812 11:52:11.824900       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0812 11:52:11.828386       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0812 11:52:11.828481       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0812 11:52:11.828675       1 instance.go:299] Using reconciler: lease
	W0812 11:52:11.829468       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:52:12.823054       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:52:12.823106       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:52:12.829732       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:52:14.185979       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:52:14.217583       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:52:14.400831       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:52:16.571225       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:52:16.790434       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:52:16.902027       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:52:20.374345       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:52:20.586247       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:52:20.617152       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:52:26.158154       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:52:27.533948       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:52:27.871114       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0812 11:52:31.829393       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98] <==
	W0812 12:02:51.251654       1 handler_proxy.go:93] no RequestInfo found in the context
	E0812 12:02:51.251702       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0812 12:02:51.251712       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0812 12:02:51.251790       1 handler_proxy.go:93] no RequestInfo found in the context
	E0812 12:02:51.251885       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0812 12:02:51.253283       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0812 12:03:51.252060       1 handler_proxy.go:93] no RequestInfo found in the context
	E0812 12:03:51.252333       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0812 12:03:51.252366       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0812 12:03:51.253418       1 handler_proxy.go:93] no RequestInfo found in the context
	E0812 12:03:51.253525       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0812 12:03:51.253551       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0812 12:05:03.475810       1 trace.go:236] Trace[1787800846]: "Update" accept:application/json, */*,audit-id:40544e1f-bb23-40e0-b08a-0052e7ccf920,client:192.168.50.114,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (12-Aug-2024 12:05:02.887) (total time: 587ms):
	Trace[1787800846]: ["GuaranteedUpdate etcd3" audit-id:40544e1f-bb23-40e0-b08a-0052e7ccf920,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 587ms (12:05:02.888)
	Trace[1787800846]:  ---"Txn call completed" 586ms (12:05:03.475)]
	Trace[1787800846]: [587.831794ms] [587.831794ms] END
	W0812 12:05:51.252580       1 handler_proxy.go:93] no RequestInfo found in the context
	E0812 12:05:51.252663       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0812 12:05:51.252673       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0812 12:05:51.253684       1 handler_proxy.go:93] no RequestInfo found in the context
	E0812 12:05:51.253766       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0812 12:05:51.253774       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f] <==
	I0812 12:00:37.132374       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:01:06.670906       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 12:01:07.142572       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:01:36.675775       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 12:01:37.151036       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:02:06.681054       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 12:02:07.160198       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:02:36.688342       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 12:02:37.168521       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:03:06.696685       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 12:03:07.178956       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:03:36.702151       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 12:03:37.187707       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0812 12:03:56.228051       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="273.867µs"
	E0812 12:04:06.707153       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 12:04:07.195915       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0812 12:04:07.229425       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="50.146µs"
	E0812 12:04:36.711952       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 12:04:37.204245       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:05:06.718493       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 12:05:07.218050       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:05:36.723144       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 12:05:37.226159       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:06:06.727945       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 12:06:07.233082       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-controller-manager [f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f] <==
	I0812 11:52:11.685115       1 serving.go:380] Generated self-signed cert in-memory
	I0812 11:52:12.154484       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0812 11:52:12.154521       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 11:52:12.156099       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0812 11:52:12.156203       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0812 11:52:12.156744       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0812 11:52:12.156814       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0812 11:52:50.172622       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-proxy [b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26] <==
	I0812 11:53:03.437451       1 server_linux.go:69] "Using iptables proxy"
	I0812 11:53:03.457354       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.114"]
	I0812 11:53:03.494924       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0812 11:53:03.494971       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0812 11:53:03.494987       1 server_linux.go:165] "Using iptables Proxier"
	I0812 11:53:03.497896       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0812 11:53:03.498188       1 server.go:872] "Version info" version="v1.30.3"
	I0812 11:53:03.498607       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 11:53:03.502384       1 config.go:192] "Starting service config controller"
	I0812 11:53:03.503087       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0812 11:53:03.503665       1 config.go:101] "Starting endpoint slice config controller"
	I0812 11:53:03.503768       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0812 11:53:03.504417       1 config.go:319] "Starting node config controller"
	I0812 11:53:03.505567       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0812 11:53:03.603932       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0812 11:53:03.604089       1 shared_informer.go:320] Caches are synced for service config
	I0812 11:53:03.605977       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804] <==
	W0812 11:52:50.228555       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0812 11:52:50.228670       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0812 11:52:50.228866       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0812 11:52:50.228950       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0812 11:52:50.229139       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0812 11:52:50.229176       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0812 11:52:50.229380       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0812 11:52:50.229458       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0812 11:52:50.229637       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0812 11:52:50.229730       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0812 11:52:50.230000       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0812 11:52:50.231377       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0812 11:52:50.231655       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0812 11:52:50.231751       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0812 11:52:50.231978       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0812 11:52:50.232080       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0812 11:52:50.232185       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0812 11:52:50.232265       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0812 11:52:50.232385       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0812 11:52:50.232463       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0812 11:52:50.233761       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0812 11:52:50.233846       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0812 11:52:50.234078       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0812 11:52:50.236383       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0812 11:52:51.439422       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 12 12:04:09 default-k8s-diff-port-581883 kubelet[947]: E0812 12:04:09.229324     947 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 12:04:09 default-k8s-diff-port-581883 kubelet[947]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 12:04:09 default-k8s-diff-port-581883 kubelet[947]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 12:04:09 default-k8s-diff-port-581883 kubelet[947]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 12:04:09 default-k8s-diff-port-581883 kubelet[947]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 12:04:22 default-k8s-diff-port-581883 kubelet[947]: E0812 12:04:22.212466     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wcpgl" podUID="11f6c813-ebc1-4712-b758-cb08ff921d77"
	Aug 12 12:04:36 default-k8s-diff-port-581883 kubelet[947]: E0812 12:04:36.211917     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wcpgl" podUID="11f6c813-ebc1-4712-b758-cb08ff921d77"
	Aug 12 12:04:48 default-k8s-diff-port-581883 kubelet[947]: E0812 12:04:48.211016     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wcpgl" podUID="11f6c813-ebc1-4712-b758-cb08ff921d77"
	Aug 12 12:05:02 default-k8s-diff-port-581883 kubelet[947]: E0812 12:05:02.211676     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wcpgl" podUID="11f6c813-ebc1-4712-b758-cb08ff921d77"
	Aug 12 12:05:09 default-k8s-diff-port-581883 kubelet[947]: E0812 12:05:09.229775     947 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 12:05:09 default-k8s-diff-port-581883 kubelet[947]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 12:05:09 default-k8s-diff-port-581883 kubelet[947]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 12:05:09 default-k8s-diff-port-581883 kubelet[947]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 12:05:09 default-k8s-diff-port-581883 kubelet[947]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 12:05:14 default-k8s-diff-port-581883 kubelet[947]: E0812 12:05:14.212207     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wcpgl" podUID="11f6c813-ebc1-4712-b758-cb08ff921d77"
	Aug 12 12:05:27 default-k8s-diff-port-581883 kubelet[947]: E0812 12:05:27.211994     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wcpgl" podUID="11f6c813-ebc1-4712-b758-cb08ff921d77"
	Aug 12 12:05:41 default-k8s-diff-port-581883 kubelet[947]: E0812 12:05:41.212971     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wcpgl" podUID="11f6c813-ebc1-4712-b758-cb08ff921d77"
	Aug 12 12:05:52 default-k8s-diff-port-581883 kubelet[947]: E0812 12:05:52.214452     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wcpgl" podUID="11f6c813-ebc1-4712-b758-cb08ff921d77"
	Aug 12 12:06:05 default-k8s-diff-port-581883 kubelet[947]: E0812 12:06:05.212696     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wcpgl" podUID="11f6c813-ebc1-4712-b758-cb08ff921d77"
	Aug 12 12:06:09 default-k8s-diff-port-581883 kubelet[947]: E0812 12:06:09.228501     947 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 12:06:09 default-k8s-diff-port-581883 kubelet[947]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 12:06:09 default-k8s-diff-port-581883 kubelet[947]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 12:06:09 default-k8s-diff-port-581883 kubelet[947]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 12:06:09 default-k8s-diff-port-581883 kubelet[947]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 12:06:17 default-k8s-diff-port-581883 kubelet[947]: E0812 12:06:17.212040     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wcpgl" podUID="11f6c813-ebc1-4712-b758-cb08ff921d77"
	
	
	==> storage-provisioner [3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c] <==
	I0812 11:53:05.339792       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0812 11:53:05.361408       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0812 11:53:05.361587       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0812 11:53:22.763758       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0812 11:53:22.764646       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-581883_44043e73-9db8-4432-9357-42746608f214!
	I0812 11:53:22.764330       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eb3f6e99-3d75-4ff9-a114-2b4261bc75e7", APIVersion:"v1", ResourceVersion:"608", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-581883_44043e73-9db8-4432-9357-42746608f214 became leader
	I0812 11:53:22.865253       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-581883_44043e73-9db8-4432-9357-42746608f214!
	

                                                
                                                
-- /stdout --
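The kube-scheduler "forbidden" warnings at the top of these logs are typically transient restart noise: the scheduler's informers begin listing resources right after the API server comes back up, before its RBAC caches have synced, and the later "Caches are synced" line shows it recovered on its own. As an illustrative manual check (not something the harness runs), impersonation could confirm the scheduler's effective permissions on a cluster in this state:

kubectl --context default-k8s-diff-port-581883 auth can-i list pods --as=system:kube-scheduler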
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-581883 -n default-k8s-diff-port-581883
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-581883 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-wcpgl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-581883 describe pod metrics-server-569cc877fc-wcpgl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-581883 describe pod metrics-server-569cc877fc-wcpgl: exit status 1 (69.744443ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-wcpgl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-581883 describe pod metrics-server-569cc877fc-wcpgl: exit status 1
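The NotFound above simply means the metrics-server pod named in the non-running list no longer existed by the time the describe ran. A manual re-check against whatever replica currently exists (assuming the profile is still up and the addon keeps its usual k8s-app=metrics-server label) would be:

kubectl --context default-k8s-diff-port-581883 -n kube-system get pods -l k8s-app=metrics-server -o wide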
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.98s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (385.95s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-993542 -n no-preload-993542
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-12 12:04:55.852055175 +0000 UTC m=+6285.008977752
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-993542 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-993542 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.763µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-993542 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
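The deployment info above is empty because the describe never actually ran: the test's 9m0s context had already expired, so the command returned "context deadline exceeded" after 1.763µs. Outside the test timeout, an illustrative way to list which images the kubernetes-dashboard deployments reference (a manual check, not part of the harness) is:

kubectl --context no-preload-993542 -n kubernetes-dashboard get deploy -o jsonpath='{.items[*].spec.template.spec.containers[*].image}'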
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-993542 -n no-preload-993542
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-993542 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-993542 logs -n 25: (1.579267427s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p old-k8s-version-835962                              | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 11:38 UTC | 12 Aug 24 11:39 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-835962             | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC | 12 Aug 24 11:39 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-835962                              | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-535697                           | kubernetes-upgrade-535697    | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC | 12 Aug 24 11:39 UTC |
	| start   | -p                                                     | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC | 12 Aug 24 11:44 UTC |
	|         | default-k8s-diff-port-581883                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-993542                  | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-993542                                   | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC | 12 Aug 24 11:49 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-581883  | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:44 UTC | 12 Aug 24 11:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:44 UTC |                     |
	|         | default-k8s-diff-port-581883                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-581883       | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:46 UTC | 12 Aug 24 11:57 UTC |
	|         | default-k8s-diff-port-581883                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-835962                              | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 12:02 UTC | 12 Aug 24 12:02 UTC |
	| start   | -p newest-cni-567702 --memory=2200 --alsologtostderr   | newest-cni-567702            | jenkins | v1.33.1 | 12 Aug 24 12:02 UTC | 12 Aug 24 12:03 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-567702             | newest-cni-567702            | jenkins | v1.33.1 | 12 Aug 24 12:03 UTC | 12 Aug 24 12:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-567702                                   | newest-cni-567702            | jenkins | v1.33.1 | 12 Aug 24 12:03 UTC | 12 Aug 24 12:03 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-567702                  | newest-cni-567702            | jenkins | v1.33.1 | 12 Aug 24 12:03 UTC | 12 Aug 24 12:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-567702 --memory=2200 --alsologtostderr   | newest-cni-567702            | jenkins | v1.33.1 | 12 Aug 24 12:03 UTC | 12 Aug 24 12:04 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| image   | newest-cni-567702 image list                           | newest-cni-567702            | jenkins | v1.33.1 | 12 Aug 24 12:04 UTC | 12 Aug 24 12:04 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-567702                                   | newest-cni-567702            | jenkins | v1.33.1 | 12 Aug 24 12:04 UTC | 12 Aug 24 12:04 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-567702                                   | newest-cni-567702            | jenkins | v1.33.1 | 12 Aug 24 12:04 UTC | 12 Aug 24 12:04 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-567702                                   | newest-cni-567702            | jenkins | v1.33.1 | 12 Aug 24 12:04 UTC | 12 Aug 24 12:04 UTC |
	| delete  | -p newest-cni-567702                                   | newest-cni-567702            | jenkins | v1.33.1 | 12 Aug 24 12:04 UTC | 12 Aug 24 12:04 UTC |
	| start   | -p auto-824402 --memory=3072                           | auto-824402                  | jenkins | v1.33.1 | 12 Aug 24 12:04 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p embed-certs-093615                                  | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 12:04 UTC | 12 Aug 24 12:04 UTC |
	| start   | -p kindnet-824402                                      | kindnet-824402               | jenkins | v1.33.1 | 12 Aug 24 12:04 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 12:04:45
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 12:04:45.108242   65845 out.go:291] Setting OutFile to fd 1 ...
	I0812 12:04:45.108388   65845 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:04:45.108393   65845 out.go:304] Setting ErrFile to fd 2...
	I0812 12:04:45.108398   65845 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:04:45.108702   65845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 12:04:45.109475   65845 out.go:298] Setting JSON to false
	I0812 12:04:45.110443   65845 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6426,"bootTime":1723457859,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 12:04:45.110517   65845 start.go:139] virtualization: kvm guest
	I0812 12:04:45.112550   65845 out.go:177] * [kindnet-824402] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 12:04:45.114045   65845 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 12:04:45.114046   65845 notify.go:220] Checking for updates...
	I0812 12:04:45.116947   65845 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 12:04:45.118508   65845 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 12:04:45.119982   65845 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 12:04:45.121389   65845 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 12:04:45.122824   65845 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 12:04:45.124496   65845 config.go:182] Loaded profile config "auto-824402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:04:45.124616   65845 config.go:182] Loaded profile config "default-k8s-diff-port-581883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:04:45.124720   65845 config.go:182] Loaded profile config "no-preload-993542": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0812 12:04:45.124831   65845 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 12:04:45.163376   65845 out.go:177] * Using the kvm2 driver based on user configuration
	I0812 12:04:45.165024   65845 start.go:297] selected driver: kvm2
	I0812 12:04:45.165043   65845 start.go:901] validating driver "kvm2" against <nil>
	I0812 12:04:45.165064   65845 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 12:04:45.165794   65845 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 12:04:45.165889   65845 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19409-3774/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 12:04:45.182265   65845 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 12:04:45.182348   65845 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 12:04:45.182583   65845 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 12:04:45.182612   65845 cni.go:84] Creating CNI manager for "kindnet"
	I0812 12:04:45.182624   65845 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0812 12:04:45.182684   65845 start.go:340] cluster config:
	{Name:kindnet-824402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-824402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 12:04:45.182795   65845 iso.go:125] acquiring lock: {Name:mk817f4bb6a5031e68978029203802957205757f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 12:04:45.184796   65845 out.go:177] * Starting "kindnet-824402" primary control-plane node in "kindnet-824402" cluster
	I0812 12:04:40.946957   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:40.947546   65466 main.go:141] libmachine: (auto-824402) DBG | unable to find current IP address of domain auto-824402 in network mk-auto-824402
	I0812 12:04:40.947578   65466 main.go:141] libmachine: (auto-824402) DBG | I0812 12:04:40.947498   65489 retry.go:31] will retry after 2.264463174s: waiting for machine to come up
	I0812 12:04:43.214117   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:43.214621   65466 main.go:141] libmachine: (auto-824402) DBG | unable to find current IP address of domain auto-824402 in network mk-auto-824402
	I0812 12:04:43.214642   65466 main.go:141] libmachine: (auto-824402) DBG | I0812 12:04:43.214599   65489 retry.go:31] will retry after 2.501733192s: waiting for machine to come up
	I0812 12:04:45.186280   65845 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 12:04:45.186326   65845 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0812 12:04:45.186337   65845 cache.go:56] Caching tarball of preloaded images
	I0812 12:04:45.186439   65845 preload.go:172] Found /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 12:04:45.186449   65845 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 12:04:45.186544   65845 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kindnet-824402/config.json ...
	I0812 12:04:45.186561   65845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/kindnet-824402/config.json: {Name:mkc986358a28214cea5adfbe4d5108e22b39e1d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:04:45.186695   65845 start.go:360] acquireMachinesLock for kindnet-824402: {Name:mkf1f3ff7da562a087a1e344d13e67c3c8140973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 12:04:45.718017   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:45.718509   65466 main.go:141] libmachine: (auto-824402) DBG | unable to find current IP address of domain auto-824402 in network mk-auto-824402
	I0812 12:04:45.718539   65466 main.go:141] libmachine: (auto-824402) DBG | I0812 12:04:45.718463   65489 retry.go:31] will retry after 3.123042128s: waiting for machine to come up
	I0812 12:04:48.845871   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:48.846616   65466 main.go:141] libmachine: (auto-824402) DBG | unable to find current IP address of domain auto-824402 in network mk-auto-824402
	I0812 12:04:48.846646   65466 main.go:141] libmachine: (auto-824402) DBG | I0812 12:04:48.846554   65489 retry.go:31] will retry after 4.353338569s: waiting for machine to come up
	I0812 12:04:54.902497   65845 start.go:364] duration metric: took 9.715763061s to acquireMachinesLock for "kindnet-824402"
	I0812 12:04:54.902561   65845 start.go:93] Provisioning new machine with config: &{Name:kindnet-824402 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-824402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 12:04:54.902696   65845 start.go:125] createHost starting for "" (driver="kvm2")
	I0812 12:04:54.905754   65845 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0812 12:04:54.905960   65845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:04:54.906009   65845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:04:54.923269   65845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41927
	I0812 12:04:54.923679   65845 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:04:54.924239   65845 main.go:141] libmachine: Using API Version  1
	I0812 12:04:54.924264   65845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:04:54.924622   65845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:04:54.924800   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetMachineName
	I0812 12:04:54.924998   65845 main.go:141] libmachine: (kindnet-824402) Calling .DriverName
	I0812 12:04:54.925203   65845 start.go:159] libmachine.API.Create for "kindnet-824402" (driver="kvm2")
	I0812 12:04:54.925232   65845 client.go:168] LocalClient.Create starting
	I0812 12:04:54.925272   65845 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem
	I0812 12:04:54.925318   65845 main.go:141] libmachine: Decoding PEM data...
	I0812 12:04:54.925337   65845 main.go:141] libmachine: Parsing certificate...
	I0812 12:04:54.925397   65845 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem
	I0812 12:04:54.925421   65845 main.go:141] libmachine: Decoding PEM data...
	I0812 12:04:54.925437   65845 main.go:141] libmachine: Parsing certificate...
	I0812 12:04:54.925476   65845 main.go:141] libmachine: Running pre-create checks...
	I0812 12:04:54.925488   65845 main.go:141] libmachine: (kindnet-824402) Calling .PreCreateCheck
	I0812 12:04:54.925901   65845 main.go:141] libmachine: (kindnet-824402) Calling .GetConfigRaw
	I0812 12:04:54.926382   65845 main.go:141] libmachine: Creating machine...
	I0812 12:04:54.926400   65845 main.go:141] libmachine: (kindnet-824402) Calling .Create
	I0812 12:04:54.926564   65845 main.go:141] libmachine: (kindnet-824402) Creating KVM machine...
	I0812 12:04:54.928117   65845 main.go:141] libmachine: (kindnet-824402) DBG | found existing default KVM network
	I0812 12:04:54.929862   65845 main.go:141] libmachine: (kindnet-824402) DBG | I0812 12:04:54.929650   65928 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:d1:9a:98} reservation:<nil>}
	I0812 12:04:54.930847   65845 main.go:141] libmachine: (kindnet-824402) DBG | I0812 12:04:54.930770   65928 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:7a:11:43} reservation:<nil>}
	I0812 12:04:54.931677   65845 main.go:141] libmachine: (kindnet-824402) DBG | I0812 12:04:54.931593   65928 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:0f:03:36} reservation:<nil>}
	I0812 12:04:54.932831   65845 main.go:141] libmachine: (kindnet-824402) DBG | I0812 12:04:54.932757   65928 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002eb240}
	I0812 12:04:54.932857   65845 main.go:141] libmachine: (kindnet-824402) DBG | created network xml: 
	I0812 12:04:54.932896   65845 main.go:141] libmachine: (kindnet-824402) DBG | <network>
	I0812 12:04:54.932910   65845 main.go:141] libmachine: (kindnet-824402) DBG |   <name>mk-kindnet-824402</name>
	I0812 12:04:54.932920   65845 main.go:141] libmachine: (kindnet-824402) DBG |   <dns enable='no'/>
	I0812 12:04:54.932940   65845 main.go:141] libmachine: (kindnet-824402) DBG |   
	I0812 12:04:54.932950   65845 main.go:141] libmachine: (kindnet-824402) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0812 12:04:54.932956   65845 main.go:141] libmachine: (kindnet-824402) DBG |     <dhcp>
	I0812 12:04:54.932962   65845 main.go:141] libmachine: (kindnet-824402) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0812 12:04:54.932973   65845 main.go:141] libmachine: (kindnet-824402) DBG |     </dhcp>
	I0812 12:04:54.932982   65845 main.go:141] libmachine: (kindnet-824402) DBG |   </ip>
	I0812 12:04:54.932991   65845 main.go:141] libmachine: (kindnet-824402) DBG |   
	I0812 12:04:54.933000   65845 main.go:141] libmachine: (kindnet-824402) DBG | </network>
	I0812 12:04:54.933010   65845 main.go:141] libmachine: (kindnet-824402) DBG | 
	I0812 12:04:54.938829   65845 main.go:141] libmachine: (kindnet-824402) DBG | trying to create private KVM network mk-kindnet-824402 192.168.72.0/24...
	I0812 12:04:55.015758   65845 main.go:141] libmachine: (kindnet-824402) DBG | private KVM network mk-kindnet-824402 192.168.72.0/24 created
	I0812 12:04:55.015794   65845 main.go:141] libmachine: (kindnet-824402) DBG | I0812 12:04:55.015737   65928 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 12:04:55.015808   65845 main.go:141] libmachine: (kindnet-824402) Setting up store path in /home/jenkins/minikube-integration/19409-3774/.minikube/machines/kindnet-824402 ...
	I0812 12:04:55.015835   65845 main.go:141] libmachine: (kindnet-824402) Building disk image from file:///home/jenkins/minikube-integration/19409-3774/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0812 12:04:55.015905   65845 main.go:141] libmachine: (kindnet-824402) Downloading /home/jenkins/minikube-integration/19409-3774/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19409-3774/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0812 12:04:53.202188   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:53.202821   65466 main.go:141] libmachine: (auto-824402) Found IP for machine: 192.168.39.142
	I0812 12:04:53.202839   65466 main.go:141] libmachine: (auto-824402) Reserving static IP address...
	I0812 12:04:53.202848   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has current primary IP address 192.168.39.142 and MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:53.203206   65466 main.go:141] libmachine: (auto-824402) DBG | unable to find host DHCP lease matching {name: "auto-824402", mac: "52:54:00:a8:95:4f", ip: "192.168.39.142"} in network mk-auto-824402
	I0812 12:04:53.285695   65466 main.go:141] libmachine: (auto-824402) DBG | Getting to WaitForSSH function...
	I0812 12:04:53.285722   65466 main.go:141] libmachine: (auto-824402) Reserved static IP address: 192.168.39.142
	I0812 12:04:53.285736   65466 main.go:141] libmachine: (auto-824402) Waiting for SSH to be available...
	I0812 12:04:53.288505   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:53.289019   65466 main.go:141] libmachine: (auto-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:95:4f", ip: ""} in network mk-auto-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:04:44 +0000 UTC Type:0 Mac:52:54:00:a8:95:4f Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a8:95:4f}
	I0812 12:04:53.289048   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined IP address 192.168.39.142 and MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:53.289197   65466 main.go:141] libmachine: (auto-824402) DBG | Using SSH client type: external
	I0812 12:04:53.289237   65466 main.go:141] libmachine: (auto-824402) DBG | Using SSH private key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/auto-824402/id_rsa (-rw-------)
	I0812 12:04:53.289272   65466 main.go:141] libmachine: (auto-824402) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.142 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19409-3774/.minikube/machines/auto-824402/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0812 12:04:53.289286   65466 main.go:141] libmachine: (auto-824402) DBG | About to run SSH command:
	I0812 12:04:53.289314   65466 main.go:141] libmachine: (auto-824402) DBG | exit 0
	I0812 12:04:53.417175   65466 main.go:141] libmachine: (auto-824402) DBG | SSH cmd err, output: <nil>: 
	I0812 12:04:53.417504   65466 main.go:141] libmachine: (auto-824402) KVM machine creation complete!
	I0812 12:04:53.417860   65466 main.go:141] libmachine: (auto-824402) Calling .GetConfigRaw
	I0812 12:04:53.418525   65466 main.go:141] libmachine: (auto-824402) Calling .DriverName
	I0812 12:04:53.418760   65466 main.go:141] libmachine: (auto-824402) Calling .DriverName
	I0812 12:04:53.418966   65466 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0812 12:04:53.418982   65466 main.go:141] libmachine: (auto-824402) Calling .GetState
	I0812 12:04:53.420568   65466 main.go:141] libmachine: Detecting operating system of created instance...
	I0812 12:04:53.420583   65466 main.go:141] libmachine: Waiting for SSH to be available...
	I0812 12:04:53.420588   65466 main.go:141] libmachine: Getting to WaitForSSH function...
	I0812 12:04:53.420593   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHHostname
	I0812 12:04:53.423217   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:53.423686   65466 main.go:141] libmachine: (auto-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:95:4f", ip: ""} in network mk-auto-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:04:44 +0000 UTC Type:0 Mac:52:54:00:a8:95:4f Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:auto-824402 Clientid:01:52:54:00:a8:95:4f}
	I0812 12:04:53.423702   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined IP address 192.168.39.142 and MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:53.423916   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHPort
	I0812 12:04:53.424103   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHKeyPath
	I0812 12:04:53.424283   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHKeyPath
	I0812 12:04:53.424413   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHUsername
	I0812 12:04:53.424599   65466 main.go:141] libmachine: Using SSH client type: native
	I0812 12:04:53.424809   65466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0812 12:04:53.424823   65466 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0812 12:04:53.536319   65466 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 12:04:53.536345   65466 main.go:141] libmachine: Detecting the provisioner...
	I0812 12:04:53.536353   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHHostname
	I0812 12:04:53.539203   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:53.539699   65466 main.go:141] libmachine: (auto-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:95:4f", ip: ""} in network mk-auto-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:04:44 +0000 UTC Type:0 Mac:52:54:00:a8:95:4f Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:auto-824402 Clientid:01:52:54:00:a8:95:4f}
	I0812 12:04:53.539721   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined IP address 192.168.39.142 and MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:53.539944   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHPort
	I0812 12:04:53.540225   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHKeyPath
	I0812 12:04:53.540441   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHKeyPath
	I0812 12:04:53.540675   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHUsername
	I0812 12:04:53.540885   65466 main.go:141] libmachine: Using SSH client type: native
	I0812 12:04:53.541059   65466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0812 12:04:53.541071   65466 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0812 12:04:53.649657   65466 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0812 12:04:53.649732   65466 main.go:141] libmachine: found compatible host: buildroot
	I0812 12:04:53.649741   65466 main.go:141] libmachine: Provisioning with buildroot...
	I0812 12:04:53.649750   65466 main.go:141] libmachine: (auto-824402) Calling .GetMachineName
	I0812 12:04:53.650006   65466 buildroot.go:166] provisioning hostname "auto-824402"
	I0812 12:04:53.650033   65466 main.go:141] libmachine: (auto-824402) Calling .GetMachineName
	I0812 12:04:53.650237   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHHostname
	I0812 12:04:53.652857   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:53.653365   65466 main.go:141] libmachine: (auto-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:95:4f", ip: ""} in network mk-auto-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:04:44 +0000 UTC Type:0 Mac:52:54:00:a8:95:4f Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:auto-824402 Clientid:01:52:54:00:a8:95:4f}
	I0812 12:04:53.653393   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined IP address 192.168.39.142 and MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:53.653571   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHPort
	I0812 12:04:53.653770   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHKeyPath
	I0812 12:04:53.653914   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHKeyPath
	I0812 12:04:53.654066   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHUsername
	I0812 12:04:53.654217   65466 main.go:141] libmachine: Using SSH client type: native
	I0812 12:04:53.654425   65466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0812 12:04:53.654438   65466 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-824402 && echo "auto-824402" | sudo tee /etc/hostname
	I0812 12:04:53.774383   65466 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-824402
	
	I0812 12:04:53.774410   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHHostname
	I0812 12:04:53.777783   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:53.778192   65466 main.go:141] libmachine: (auto-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:95:4f", ip: ""} in network mk-auto-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:04:44 +0000 UTC Type:0 Mac:52:54:00:a8:95:4f Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:auto-824402 Clientid:01:52:54:00:a8:95:4f}
	I0812 12:04:53.778223   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined IP address 192.168.39.142 and MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:53.778483   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHPort
	I0812 12:04:53.778728   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHKeyPath
	I0812 12:04:53.778929   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHKeyPath
	I0812 12:04:53.779075   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHUsername
	I0812 12:04:53.779298   65466 main.go:141] libmachine: Using SSH client type: native
	I0812 12:04:53.779488   65466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0812 12:04:53.779511   65466 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-824402' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-824402/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-824402' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 12:04:53.893855   65466 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 12:04:53.893886   65466 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19409-3774/.minikube CaCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19409-3774/.minikube}
	I0812 12:04:53.893923   65466 buildroot.go:174] setting up certificates
	I0812 12:04:53.893938   65466 provision.go:84] configureAuth start
	I0812 12:04:53.893954   65466 main.go:141] libmachine: (auto-824402) Calling .GetMachineName
	I0812 12:04:53.894261   65466 main.go:141] libmachine: (auto-824402) Calling .GetIP
	I0812 12:04:53.897409   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:53.897830   65466 main.go:141] libmachine: (auto-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:95:4f", ip: ""} in network mk-auto-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:04:44 +0000 UTC Type:0 Mac:52:54:00:a8:95:4f Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:auto-824402 Clientid:01:52:54:00:a8:95:4f}
	I0812 12:04:53.897868   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined IP address 192.168.39.142 and MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:53.898098   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHHostname
	I0812 12:04:53.902067   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:53.902430   65466 main.go:141] libmachine: (auto-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:95:4f", ip: ""} in network mk-auto-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:04:44 +0000 UTC Type:0 Mac:52:54:00:a8:95:4f Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:auto-824402 Clientid:01:52:54:00:a8:95:4f}
	I0812 12:04:53.902450   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined IP address 192.168.39.142 and MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:53.902625   65466 provision.go:143] copyHostCerts
	I0812 12:04:53.902684   65466 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem, removing ...
	I0812 12:04:53.902693   65466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem
	I0812 12:04:53.902758   65466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem (1082 bytes)
	I0812 12:04:53.902853   65466 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem, removing ...
	I0812 12:04:53.902861   65466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem
	I0812 12:04:53.902889   65466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem (1123 bytes)
	I0812 12:04:53.902953   65466 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem, removing ...
	I0812 12:04:53.902960   65466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem
	I0812 12:04:53.902980   65466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem (1679 bytes)
	I0812 12:04:53.903038   65466 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem org=jenkins.auto-824402 san=[127.0.0.1 192.168.39.142 auto-824402 localhost minikube]
	I0812 12:04:54.216661   65466 provision.go:177] copyRemoteCerts
	I0812 12:04:54.216717   65466 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 12:04:54.216738   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHHostname
	I0812 12:04:54.219600   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:54.219951   65466 main.go:141] libmachine: (auto-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:95:4f", ip: ""} in network mk-auto-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:04:44 +0000 UTC Type:0 Mac:52:54:00:a8:95:4f Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:auto-824402 Clientid:01:52:54:00:a8:95:4f}
	I0812 12:04:54.219981   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined IP address 192.168.39.142 and MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:54.220187   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHPort
	I0812 12:04:54.220401   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHKeyPath
	I0812 12:04:54.220610   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHUsername
	I0812 12:04:54.220771   65466 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/auto-824402/id_rsa Username:docker}
	I0812 12:04:54.303576   65466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0812 12:04:54.327901   65466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0812 12:04:54.351357   65466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0812 12:04:54.376275   65466 provision.go:87] duration metric: took 482.323412ms to configureAuth
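The configureAuth step above copies the host CA material, generates a server certificate with san=[127.0.0.1 192.168.39.142 auto-824402 localhost minikube], and scp's it to /etc/docker on the guest. As a point of reference only, here is a minimal Go sketch of issuing a certificate with those SANs; it self-signs for brevity instead of signing with the ca.pem/ca-key.pem pair named in the log, so it is illustrative and not minikube's implementation.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a throwaway key; the real flow reuses the CA key pair from the log.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.auto-824402"}}, // org value taken from the log line above
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SANs reported by provision.go:117.
		DNSNames:    []string{"auto-824402", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.142")},
	}

	// Self-signed (template == parent) purely to keep the sketch short.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
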
	I0812 12:04:54.376300   65466 buildroot.go:189] setting minikube options for container-runtime
	I0812 12:04:54.376482   65466 config.go:182] Loaded profile config "auto-824402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:04:54.376628   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHHostname
	I0812 12:04:54.379692   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:54.380136   65466 main.go:141] libmachine: (auto-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:95:4f", ip: ""} in network mk-auto-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:04:44 +0000 UTC Type:0 Mac:52:54:00:a8:95:4f Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:auto-824402 Clientid:01:52:54:00:a8:95:4f}
	I0812 12:04:54.380174   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined IP address 192.168.39.142 and MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:54.380450   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHPort
	I0812 12:04:54.380686   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHKeyPath
	I0812 12:04:54.380906   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHKeyPath
	I0812 12:04:54.381094   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHUsername
	I0812 12:04:54.381312   65466 main.go:141] libmachine: Using SSH client type: native
	I0812 12:04:54.381512   65466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0812 12:04:54.381537   65466 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 12:04:54.656771   65466 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 12:04:54.656799   65466 main.go:141] libmachine: Checking connection to Docker...
	I0812 12:04:54.656809   65466 main.go:141] libmachine: (auto-824402) Calling .GetURL
	I0812 12:04:54.658558   65466 main.go:141] libmachine: (auto-824402) DBG | Using libvirt version 6000000
	I0812 12:04:54.660979   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:54.661420   65466 main.go:141] libmachine: (auto-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:95:4f", ip: ""} in network mk-auto-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:04:44 +0000 UTC Type:0 Mac:52:54:00:a8:95:4f Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:auto-824402 Clientid:01:52:54:00:a8:95:4f}
	I0812 12:04:54.661451   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined IP address 192.168.39.142 and MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:54.661682   65466 main.go:141] libmachine: Docker is up and running!
	I0812 12:04:54.661697   65466 main.go:141] libmachine: Reticulating splines...
	I0812 12:04:54.661704   65466 client.go:171] duration metric: took 24.211239211s to LocalClient.Create
	I0812 12:04:54.661728   65466 start.go:167] duration metric: took 24.211309069s to libmachine.API.Create "auto-824402"
	I0812 12:04:54.661741   65466 start.go:293] postStartSetup for "auto-824402" (driver="kvm2")
	I0812 12:04:54.661756   65466 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 12:04:54.661777   65466 main.go:141] libmachine: (auto-824402) Calling .DriverName
	I0812 12:04:54.662037   65466 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 12:04:54.662060   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHHostname
	I0812 12:04:54.665041   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:54.665393   65466 main.go:141] libmachine: (auto-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:95:4f", ip: ""} in network mk-auto-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:04:44 +0000 UTC Type:0 Mac:52:54:00:a8:95:4f Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:auto-824402 Clientid:01:52:54:00:a8:95:4f}
	I0812 12:04:54.665420   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined IP address 192.168.39.142 and MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:54.665613   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHPort
	I0812 12:04:54.665800   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHKeyPath
	I0812 12:04:54.665991   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHUsername
	I0812 12:04:54.666146   65466 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/auto-824402/id_rsa Username:docker}
	I0812 12:04:54.746926   65466 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 12:04:54.750939   65466 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 12:04:54.750961   65466 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/addons for local assets ...
	I0812 12:04:54.751021   65466 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/files for local assets ...
	I0812 12:04:54.751090   65466 filesync.go:149] local asset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> 109272.pem in /etc/ssl/certs
	I0812 12:04:54.751169   65466 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 12:04:54.760601   65466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /etc/ssl/certs/109272.pem (1708 bytes)
	I0812 12:04:54.783865   65466 start.go:296] duration metric: took 122.109553ms for postStartSetup
	I0812 12:04:54.783918   65466 main.go:141] libmachine: (auto-824402) Calling .GetConfigRaw
	I0812 12:04:54.784519   65466 main.go:141] libmachine: (auto-824402) Calling .GetIP
	I0812 12:04:54.787705   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:54.788091   65466 main.go:141] libmachine: (auto-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:95:4f", ip: ""} in network mk-auto-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:04:44 +0000 UTC Type:0 Mac:52:54:00:a8:95:4f Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:auto-824402 Clientid:01:52:54:00:a8:95:4f}
	I0812 12:04:54.788129   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined IP address 192.168.39.142 and MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:54.788498   65466 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/auto-824402/config.json ...
	I0812 12:04:54.788754   65466 start.go:128] duration metric: took 24.358360297s to createHost
	I0812 12:04:54.788843   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHHostname
	I0812 12:04:54.791485   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:54.792000   65466 main.go:141] libmachine: (auto-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:95:4f", ip: ""} in network mk-auto-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:04:44 +0000 UTC Type:0 Mac:52:54:00:a8:95:4f Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:auto-824402 Clientid:01:52:54:00:a8:95:4f}
	I0812 12:04:54.792029   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined IP address 192.168.39.142 and MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:54.792247   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHPort
	I0812 12:04:54.792477   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHKeyPath
	I0812 12:04:54.792661   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHKeyPath
	I0812 12:04:54.792827   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHUsername
	I0812 12:04:54.793007   65466 main.go:141] libmachine: Using SSH client type: native
	I0812 12:04:54.793221   65466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0812 12:04:54.793243   65466 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0812 12:04:54.902318   65466 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723464294.871974947
	
	I0812 12:04:54.902339   65466 fix.go:216] guest clock: 1723464294.871974947
	I0812 12:04:54.902349   65466 fix.go:229] Guest: 2024-08-12 12:04:54.871974947 +0000 UTC Remote: 2024-08-12 12:04:54.788770018 +0000 UTC m=+24.471989412 (delta=83.204929ms)
	I0812 12:04:54.902390   65466 fix.go:200] guest clock delta is within tolerance: 83.204929ms
	I0812 12:04:54.902395   65466 start.go:83] releasing machines lock for "auto-824402", held for 24.472098285s
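The fix.go entries above parse the guest's `date +%s.%N` output (1723464294.871974947), compare it with the host-side timestamp (…12:04:54.788770018 UTC), and accept the resulting 83.204929ms delta as within tolerance. A minimal, illustrative Go sketch of that comparison follows; parseGuestClock and the one-second threshold are assumptions for the example, not minikube's code.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock is a hypothetical helper: it turns `date +%s.%N` output
// such as "1723464294.871974947" into a time.Time (assumes a 9-digit fraction).
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nsec := int64(0)
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Values taken from the log entries above.
	guest, err := parseGuestClock("1723464294.871974947")
	if err != nil {
		panic(err)
	}
	host := time.Unix(1723464294, 788770018) // "Remote: 2024-08-12 12:04:54.788770018 +0000 UTC"

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	fmt.Println("guest clock delta:", delta) // prints 83.204929ms, matching the log
	// Example tolerance only; the actual threshold minikube applies is not shown in this log.
	fmt.Println("within 1s tolerance:", delta < time.Second)
}
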
	I0812 12:04:54.902422   65466 main.go:141] libmachine: (auto-824402) Calling .DriverName
	I0812 12:04:54.902762   65466 main.go:141] libmachine: (auto-824402) Calling .GetIP
	I0812 12:04:54.905675   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:54.906059   65466 main.go:141] libmachine: (auto-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:95:4f", ip: ""} in network mk-auto-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:04:44 +0000 UTC Type:0 Mac:52:54:00:a8:95:4f Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:auto-824402 Clientid:01:52:54:00:a8:95:4f}
	I0812 12:04:54.906084   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined IP address 192.168.39.142 and MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:54.906376   65466 main.go:141] libmachine: (auto-824402) Calling .DriverName
	I0812 12:04:54.906974   65466 main.go:141] libmachine: (auto-824402) Calling .DriverName
	I0812 12:04:54.907195   65466 main.go:141] libmachine: (auto-824402) Calling .DriverName
	I0812 12:04:54.907308   65466 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 12:04:54.907357   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHHostname
	I0812 12:04:54.907467   65466 ssh_runner.go:195] Run: cat /version.json
	I0812 12:04:54.907495   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHHostname
	I0812 12:04:54.910271   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:54.910664   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:54.910729   65466 main.go:141] libmachine: (auto-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:95:4f", ip: ""} in network mk-auto-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:04:44 +0000 UTC Type:0 Mac:52:54:00:a8:95:4f Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:auto-824402 Clientid:01:52:54:00:a8:95:4f}
	I0812 12:04:54.910754   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined IP address 192.168.39.142 and MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:54.910926   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHPort
	I0812 12:04:54.911095   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHKeyPath
	I0812 12:04:54.911158   65466 main.go:141] libmachine: (auto-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:95:4f", ip: ""} in network mk-auto-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:04:44 +0000 UTC Type:0 Mac:52:54:00:a8:95:4f Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:auto-824402 Clientid:01:52:54:00:a8:95:4f}
	I0812 12:04:54.911180   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined IP address 192.168.39.142 and MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:54.911263   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHUsername
	I0812 12:04:54.911468   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHPort
	I0812 12:04:54.911461   65466 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/auto-824402/id_rsa Username:docker}
	I0812 12:04:54.911637   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHKeyPath
	I0812 12:04:54.911818   65466 main.go:141] libmachine: (auto-824402) Calling .GetSSHUsername
	I0812 12:04:54.911957   65466 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/auto-824402/id_rsa Username:docker}
	I0812 12:04:54.990946   65466 ssh_runner.go:195] Run: systemctl --version
	I0812 12:04:55.024200   65466 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 12:04:55.198750   65466 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 12:04:55.205259   65466 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 12:04:55.205346   65466 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 12:04:55.224124   65466 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0812 12:04:55.224152   65466 start.go:495] detecting cgroup driver to use...
	I0812 12:04:55.224228   65466 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 12:04:55.241427   65466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 12:04:55.256291   65466 docker.go:217] disabling cri-docker service (if available) ...
	I0812 12:04:55.256376   65466 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 12:04:55.271868   65466 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 12:04:55.285847   65466 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	
	
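The CRI-O section below records repeated debug-level CRI calls against no-preload-993542: /runtime.v1.RuntimeService/Version, /runtime.v1.ImageService/ImageFsInfo, and /runtime.v1.RuntimeService/ListContainers with an empty filter. For reference, a minimal Go sketch that issues the same three calls through the k8s.io/cri-api v1 client is shown here; the unix:///var/run/crio/crio.sock endpoint is CRI-O's conventional default and is an assumption, not something taken from this log.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Assumed CRI-O endpoint; adjust if the runtime is configured differently.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// /runtime.v1.RuntimeService/Version
	v, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println("runtime:", v.GetRuntimeName(), v.GetRuntimeVersion())

	// /runtime.v1.ImageService/ImageFsInfo
	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		panic(err)
	}
	for _, u := range fs.ImageFilesystems {
		fmt.Println("image fs:", u.GetFsId().GetMountpoint(), "used bytes:", u.GetUsedBytes().GetValue())
	}

	// /runtime.v1.RuntimeService/ListContainers with an empty filter, as in the log.
	cs, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{Filter: &runtimeapi.ContainerFilter{}})
	if err != nil {
		panic(err)
	}
	for _, c := range cs.Containers {
		fmt.Println(c.GetMetadata().GetName(), c.GetState().String())
	}
}
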
	==> CRI-O <==
	Aug 12 12:04:56 no-preload-993542 crio[730]: time="2024-08-12 12:04:56.534071604Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464296534027807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7426f6dd-d177-41b3-9b35-25270568287f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:04:56 no-preload-993542 crio[730]: time="2024-08-12 12:04:56.534820687Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fcb704fd-1f5c-42b6-8055-7726a8b8c724 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:04:56 no-preload-993542 crio[730]: time="2024-08-12 12:04:56.534895329Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fcb704fd-1f5c-42b6-8055-7726a8b8c724 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:04:56 no-preload-993542 crio[730]: time="2024-08-12 12:04:56.535178643Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a97afb2dea0bb3b76e3e58d3af919d0326f43abd6b38fabd7927df99b4259f71,PodSandboxId:7616ad30a9581357a458cd1a11073d22bb8c424223ee22c932932a5ade973735,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463360222889831,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-2gc2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d5375c0-6f19-40b7-98bc-50d4ef45fd93,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e819e17e634c7a96ea18fe4ede7e232c6917308ca752baa1e22fe9b81b01b964,PodSandboxId:340c257cd6ea81f5938bdb10bb192ee6c683de496ae7fedbadff86fb7eaae1f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463360198153065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-shfmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6fd90de8-af9e-4b43-9fa7-b503a00e9845,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22d4668d24d8d2bf08e095f533872a62efa233a54566b9fb66b48c9199254746,PodSandboxId:31ebd1fd6c11c232d784db4e2a05c0c8e85ab46b2b3e5089ea051766dadec8d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1723463359714566124,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb7a321-e575-44e5-8d10-3749d1285806,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dec1828ea63c924f28aa60f3ed0f89bf784169b81864bc9db09734b2920ab69,PodSandboxId:737754dadaa6878c0a4d4718b28b52429bf3bd5b317ee7a8abb32b9858e080c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:
1723463358295919562,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8jwkz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43501e17-fde3-4468-a170-e64a58088ec2,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb29c2ee9470bab3324cbbc3c28453c35bc2a0ad8c0aca5a1a8119576954c94,PodSandboxId:ac89ea47168b24be940abc529730ba644e1b3be10336ccd3698ee9764a4b58a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1723463347422596926,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26ec55711ba1c1052321c141944ffc1d,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd414c1501d0a2ea9268272b9ed45aef07b0890a067119f2b5339db90f92d1ef,PodSandboxId:a74ad84115895d85d62eff6d860093c52405e94eba7044122d62924b7ee16db4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723463347389167808,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d7d5f7c83169a839579d85d6294d868,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b651d9db6daec2de76f9987eb56b439dd17620b7f9be407fd18ccea662ce8d19,PodSandboxId:a0b2f0f3531d801cf6c85ce7271abf80e884319cf79076a0c8ea694bedf102ba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1723463347373362261,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c900825ef33ee78a93cbc9d9fb3045,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31ef15b39cb8c8554c6cb881fe9a5f2563cdf48e576e32c65819689c66a68f1e,PodSandboxId:bc784b19036a82b2f3db06d45942d70d6fe8c56bede3fc6de7b632f04057c85c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1723463347353317220,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3371f860ec69a456c5c6ca316a385978,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33cf9f4f3371c7fe0d9d3b2280aaa1489b3560e829814774f4fd82b42fbdde9e,PodSandboxId:2d2ddb348e06719e7175687fe30bc4c0d5ce580cb3e45981dcb4adf468271142,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1723463060119516283,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3371f860ec69a456c5c6ca316a385978,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fcb704fd-1f5c-42b6-8055-7726a8b8c724 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:04:56 no-preload-993542 crio[730]: time="2024-08-12 12:04:56.585318013Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=71156cfa-f0bf-4ef6-b1bf-1891fae8ab86 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:04:56 no-preload-993542 crio[730]: time="2024-08-12 12:04:56.585418153Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=71156cfa-f0bf-4ef6-b1bf-1891fae8ab86 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:04:56 no-preload-993542 crio[730]: time="2024-08-12 12:04:56.587420754Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f4021477-638d-41f0-806b-0ad51caf86be name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:04:56 no-preload-993542 crio[730]: time="2024-08-12 12:04:56.587969361Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464296587930469,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f4021477-638d-41f0-806b-0ad51caf86be name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:04:56 no-preload-993542 crio[730]: time="2024-08-12 12:04:56.589018450Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=65d63b52-3f27-4990-b6ff-78dd7b41fc94 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:04:56 no-preload-993542 crio[730]: time="2024-08-12 12:04:56.589213208Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=65d63b52-3f27-4990-b6ff-78dd7b41fc94 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:04:56 no-preload-993542 crio[730]: time="2024-08-12 12:04:56.589878855Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a97afb2dea0bb3b76e3e58d3af919d0326f43abd6b38fabd7927df99b4259f71,PodSandboxId:7616ad30a9581357a458cd1a11073d22bb8c424223ee22c932932a5ade973735,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463360222889831,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-2gc2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d5375c0-6f19-40b7-98bc-50d4ef45fd93,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e819e17e634c7a96ea18fe4ede7e232c6917308ca752baa1e22fe9b81b01b964,PodSandboxId:340c257cd6ea81f5938bdb10bb192ee6c683de496ae7fedbadff86fb7eaae1f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463360198153065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-shfmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6fd90de8-af9e-4b43-9fa7-b503a00e9845,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22d4668d24d8d2bf08e095f533872a62efa233a54566b9fb66b48c9199254746,PodSandboxId:31ebd1fd6c11c232d784db4e2a05c0c8e85ab46b2b3e5089ea051766dadec8d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1723463359714566124,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb7a321-e575-44e5-8d10-3749d1285806,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dec1828ea63c924f28aa60f3ed0f89bf784169b81864bc9db09734b2920ab69,PodSandboxId:737754dadaa6878c0a4d4718b28b52429bf3bd5b317ee7a8abb32b9858e080c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:
1723463358295919562,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8jwkz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43501e17-fde3-4468-a170-e64a58088ec2,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb29c2ee9470bab3324cbbc3c28453c35bc2a0ad8c0aca5a1a8119576954c94,PodSandboxId:ac89ea47168b24be940abc529730ba644e1b3be10336ccd3698ee9764a4b58a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1723463347422596926,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26ec55711ba1c1052321c141944ffc1d,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd414c1501d0a2ea9268272b9ed45aef07b0890a067119f2b5339db90f92d1ef,PodSandboxId:a74ad84115895d85d62eff6d860093c52405e94eba7044122d62924b7ee16db4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723463347389167808,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d7d5f7c83169a839579d85d6294d868,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b651d9db6daec2de76f9987eb56b439dd17620b7f9be407fd18ccea662ce8d19,PodSandboxId:a0b2f0f3531d801cf6c85ce7271abf80e884319cf79076a0c8ea694bedf102ba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1723463347373362261,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c900825ef33ee78a93cbc9d9fb3045,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31ef15b39cb8c8554c6cb881fe9a5f2563cdf48e576e32c65819689c66a68f1e,PodSandboxId:bc784b19036a82b2f3db06d45942d70d6fe8c56bede3fc6de7b632f04057c85c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1723463347353317220,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3371f860ec69a456c5c6ca316a385978,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33cf9f4f3371c7fe0d9d3b2280aaa1489b3560e829814774f4fd82b42fbdde9e,PodSandboxId:2d2ddb348e06719e7175687fe30bc4c0d5ce580cb3e45981dcb4adf468271142,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1723463060119516283,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3371f860ec69a456c5c6ca316a385978,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=65d63b52-3f27-4990-b6ff-78dd7b41fc94 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:04:56 no-preload-993542 crio[730]: time="2024-08-12 12:04:56.634634623Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e5417af9-4217-47fa-9846-96e8b7d8ec94 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:04:56 no-preload-993542 crio[730]: time="2024-08-12 12:04:56.634718330Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e5417af9-4217-47fa-9846-96e8b7d8ec94 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:04:56 no-preload-993542 crio[730]: time="2024-08-12 12:04:56.636419361Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=091ed381-d275-44c6-9c07-07c1f0656686 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:04:56 no-preload-993542 crio[730]: time="2024-08-12 12:04:56.637025216Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464296636960212,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=091ed381-d275-44c6-9c07-07c1f0656686 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:04:56 no-preload-993542 crio[730]: time="2024-08-12 12:04:56.637671778Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe831be9-d14b-445a-bf2e-5e196ba4faca name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:04:56 no-preload-993542 crio[730]: time="2024-08-12 12:04:56.637777650Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe831be9-d14b-445a-bf2e-5e196ba4faca name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:04:56 no-preload-993542 crio[730]: time="2024-08-12 12:04:56.638040686Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a97afb2dea0bb3b76e3e58d3af919d0326f43abd6b38fabd7927df99b4259f71,PodSandboxId:7616ad30a9581357a458cd1a11073d22bb8c424223ee22c932932a5ade973735,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463360222889831,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-2gc2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d5375c0-6f19-40b7-98bc-50d4ef45fd93,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e819e17e634c7a96ea18fe4ede7e232c6917308ca752baa1e22fe9b81b01b964,PodSandboxId:340c257cd6ea81f5938bdb10bb192ee6c683de496ae7fedbadff86fb7eaae1f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463360198153065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-shfmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6fd90de8-af9e-4b43-9fa7-b503a00e9845,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22d4668d24d8d2bf08e095f533872a62efa233a54566b9fb66b48c9199254746,PodSandboxId:31ebd1fd6c11c232d784db4e2a05c0c8e85ab46b2b3e5089ea051766dadec8d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1723463359714566124,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb7a321-e575-44e5-8d10-3749d1285806,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dec1828ea63c924f28aa60f3ed0f89bf784169b81864bc9db09734b2920ab69,PodSandboxId:737754dadaa6878c0a4d4718b28b52429bf3bd5b317ee7a8abb32b9858e080c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:
1723463358295919562,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8jwkz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43501e17-fde3-4468-a170-e64a58088ec2,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb29c2ee9470bab3324cbbc3c28453c35bc2a0ad8c0aca5a1a8119576954c94,PodSandboxId:ac89ea47168b24be940abc529730ba644e1b3be10336ccd3698ee9764a4b58a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1723463347422596926,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26ec55711ba1c1052321c141944ffc1d,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd414c1501d0a2ea9268272b9ed45aef07b0890a067119f2b5339db90f92d1ef,PodSandboxId:a74ad84115895d85d62eff6d860093c52405e94eba7044122d62924b7ee16db4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723463347389167808,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d7d5f7c83169a839579d85d6294d868,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b651d9db6daec2de76f9987eb56b439dd17620b7f9be407fd18ccea662ce8d19,PodSandboxId:a0b2f0f3531d801cf6c85ce7271abf80e884319cf79076a0c8ea694bedf102ba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1723463347373362261,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c900825ef33ee78a93cbc9d9fb3045,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31ef15b39cb8c8554c6cb881fe9a5f2563cdf48e576e32c65819689c66a68f1e,PodSandboxId:bc784b19036a82b2f3db06d45942d70d6fe8c56bede3fc6de7b632f04057c85c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1723463347353317220,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3371f860ec69a456c5c6ca316a385978,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33cf9f4f3371c7fe0d9d3b2280aaa1489b3560e829814774f4fd82b42fbdde9e,PodSandboxId:2d2ddb348e06719e7175687fe30bc4c0d5ce580cb3e45981dcb4adf468271142,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1723463060119516283,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3371f860ec69a456c5c6ca316a385978,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fe831be9-d14b-445a-bf2e-5e196ba4faca name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:04:56 no-preload-993542 crio[730]: time="2024-08-12 12:04:56.687724616Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bd0d5806-5c7f-4c35-b80c-bfe0de35f987 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:04:56 no-preload-993542 crio[730]: time="2024-08-12 12:04:56.687922570Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bd0d5806-5c7f-4c35-b80c-bfe0de35f987 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:04:56 no-preload-993542 crio[730]: time="2024-08-12 12:04:56.689783393Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=96d1a838-cd19-4f6d-acfd-b327ae84e40f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:04:56 no-preload-993542 crio[730]: time="2024-08-12 12:04:56.690211477Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464296690184029,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=96d1a838-cd19-4f6d-acfd-b327ae84e40f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:04:56 no-preload-993542 crio[730]: time="2024-08-12 12:04:56.691323402Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0562c62c-65a9-4ef0-945e-243a99155d4a name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:04:56 no-preload-993542 crio[730]: time="2024-08-12 12:04:56.691411387Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0562c62c-65a9-4ef0-945e-243a99155d4a name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:04:56 no-preload-993542 crio[730]: time="2024-08-12 12:04:56.692000763Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a97afb2dea0bb3b76e3e58d3af919d0326f43abd6b38fabd7927df99b4259f71,PodSandboxId:7616ad30a9581357a458cd1a11073d22bb8c424223ee22c932932a5ade973735,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463360222889831,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-2gc2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d5375c0-6f19-40b7-98bc-50d4ef45fd93,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e819e17e634c7a96ea18fe4ede7e232c6917308ca752baa1e22fe9b81b01b964,PodSandboxId:340c257cd6ea81f5938bdb10bb192ee6c683de496ae7fedbadff86fb7eaae1f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463360198153065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-shfmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6fd90de8-af9e-4b43-9fa7-b503a00e9845,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22d4668d24d8d2bf08e095f533872a62efa233a54566b9fb66b48c9199254746,PodSandboxId:31ebd1fd6c11c232d784db4e2a05c0c8e85ab46b2b3e5089ea051766dadec8d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1723463359714566124,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb7a321-e575-44e5-8d10-3749d1285806,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dec1828ea63c924f28aa60f3ed0f89bf784169b81864bc9db09734b2920ab69,PodSandboxId:737754dadaa6878c0a4d4718b28b52429bf3bd5b317ee7a8abb32b9858e080c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:
1723463358295919562,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8jwkz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43501e17-fde3-4468-a170-e64a58088ec2,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb29c2ee9470bab3324cbbc3c28453c35bc2a0ad8c0aca5a1a8119576954c94,PodSandboxId:ac89ea47168b24be940abc529730ba644e1b3be10336ccd3698ee9764a4b58a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1723463347422596926,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26ec55711ba1c1052321c141944ffc1d,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd414c1501d0a2ea9268272b9ed45aef07b0890a067119f2b5339db90f92d1ef,PodSandboxId:a74ad84115895d85d62eff6d860093c52405e94eba7044122d62924b7ee16db4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723463347389167808,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d7d5f7c83169a839579d85d6294d868,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b651d9db6daec2de76f9987eb56b439dd17620b7f9be407fd18ccea662ce8d19,PodSandboxId:a0b2f0f3531d801cf6c85ce7271abf80e884319cf79076a0c8ea694bedf102ba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1723463347373362261,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c900825ef33ee78a93cbc9d9fb3045,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31ef15b39cb8c8554c6cb881fe9a5f2563cdf48e576e32c65819689c66a68f1e,PodSandboxId:bc784b19036a82b2f3db06d45942d70d6fe8c56bede3fc6de7b632f04057c85c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1723463347353317220,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3371f860ec69a456c5c6ca316a385978,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33cf9f4f3371c7fe0d9d3b2280aaa1489b3560e829814774f4fd82b42fbdde9e,PodSandboxId:2d2ddb348e06719e7175687fe30bc4c0d5ce580cb3e45981dcb4adf468271142,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1723463060119516283,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-993542,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3371f860ec69a456c5c6ca316a385978,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0562c62c-65a9-4ef0-945e-243a99155d4a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a97afb2dea0bb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   7616ad30a9581       coredns-6f6b679f8f-2gc2z
	e819e17e634c7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   340c257cd6ea8       coredns-6f6b679f8f-shfmr
	22d4668d24d8d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   31ebd1fd6c11c       storage-provisioner
	2dec1828ea63c       41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318   15 minutes ago      Running             kube-proxy                0                   737754dadaa68       kube-proxy-8jwkz
	2cb29c2ee9470       0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c   15 minutes ago      Running             kube-scheduler            2                   ac89ea47168b2       kube-scheduler-no-preload-993542
	cd414c1501d0a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   15 minutes ago      Running             etcd                      2                   a74ad84115895       etcd-no-preload-993542
	b651d9db6daec       fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c   15 minutes ago      Running             kube-controller-manager   2                   a0b2f0f3531d8       kube-controller-manager-no-preload-993542
	31ef15b39cb8c       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   15 minutes ago      Running             kube-apiserver            2                   bc784b19036a8       kube-apiserver-no-preload-993542
	33cf9f4f3371c       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   20 minutes ago      Exited              kube-apiserver            1                   2d2ddb348e067       kube-apiserver-no-preload-993542
	
	
	==> coredns [a97afb2dea0bb3b76e3e58d3af919d0326f43abd6b38fabd7927df99b4259f71] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [e819e17e634c7a96ea18fe4ede7e232c6917308ca752baa1e22fe9b81b01b964] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-993542
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-993542
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
	                    minikube.k8s.io/name=no-preload-993542
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_12T11_49_13_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 11:49:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-993542
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 12:04:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 12:04:39 +0000   Mon, 12 Aug 2024 11:49:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 12:04:39 +0000   Mon, 12 Aug 2024 11:49:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 12:04:39 +0000   Mon, 12 Aug 2024 11:49:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 12:04:39 +0000   Mon, 12 Aug 2024 11:49:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.148
	  Hostname:    no-preload-993542
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 384be5da85b84567aeaffb21db9a0f6d
	  System UUID:                384be5da-85b8-4567-aeaf-fb21db9a0f6d
	  Boot ID:                    eee01779-c9d5-4d04-b9ff-057155f1346b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-rc.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-2gc2z                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-6f6b679f8f-shfmr                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-no-preload-993542                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-no-preload-993542             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-no-preload-993542    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-8jwkz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-no-preload-993542             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-6867b74b74-25zg8              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node no-preload-993542 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node no-preload-993542 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node no-preload-993542 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m   node-controller  Node no-preload-993542 event: Registered Node no-preload-993542 in Controller
	  Normal  CIDRAssignmentFailed     15m   cidrAllocator    Node no-preload-993542 status is now: CIDRAssignmentFailed
	
	
	==> dmesg <==
	[  +0.045444] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.942233] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.935580] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.447595] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.463981] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.057589] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053561] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.170335] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.149306] systemd-fstab-generator[685]: Ignoring "noauto" option for root device
	[  +0.283613] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[Aug12 11:44] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.066910] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.144836] systemd-fstab-generator[1431]: Ignoring "noauto" option for root device
	[  +2.962560] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.162305] kauditd_printk_skb: 53 callbacks suppressed
	[ +27.438139] kauditd_printk_skb: 30 callbacks suppressed
	[Aug12 11:49] systemd-fstab-generator[3092]: Ignoring "noauto" option for root device
	[  +0.063307] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.481610] systemd-fstab-generator[3416]: Ignoring "noauto" option for root device
	[  +0.080980] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.602216] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.230335] systemd-fstab-generator[3626]: Ignoring "noauto" option for root device
	[  +6.968861] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [cd414c1501d0a2ea9268272b9ed45aef07b0890a067119f2b5339db90f92d1ef] <==
	{"level":"info","ts":"2024-08-12T11:49:08.135662Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-12T11:49:08.137826Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-12T11:49:08.144791Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-12T11:49:08.138078Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8cf942be0a1301ad","local-member-id":"d94a8047b7882d6e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T11:49:08.145254Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T11:49:08.142076Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-12T11:49:08.149441Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-12T11:49:08.149580Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-12T11:49:08.149690Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T11:49:08.151563Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.148:2379"}
	{"level":"info","ts":"2024-08-12T11:52:08.925719Z","caller":"traceutil/trace.go:171","msg":"trace[739248365] transaction","detail":"{read_only:false; response_revision:589; number_of_response:1; }","duration":"100.614189ms","start":"2024-08-12T11:52:08.825068Z","end":"2024-08-12T11:52:08.925682Z","steps":["trace[739248365] 'process raft request'  (duration: 100.467509ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T11:52:09.182351Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.132026ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-12T11:52:09.182800Z","caller":"traceutil/trace.go:171","msg":"trace[1543864765] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:589; }","duration":"144.646059ms","start":"2024-08-12T11:52:09.038138Z","end":"2024-08-12T11:52:09.182784Z","steps":["trace[1543864765] 'range keys from in-memory index tree'  (duration: 144.108501ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T11:59:08.475334Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":686}
	{"level":"info","ts":"2024-08-12T11:59:08.485423Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":686,"took":"9.593281ms","hash":3361135427,"current-db-size-bytes":2129920,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2129920,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-08-12T11:59:08.485494Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3361135427,"revision":686,"compact-revision":-1}
	{"level":"warn","ts":"2024-08-12T12:02:58.725184Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.495245ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-12T12:02:58.725374Z","caller":"traceutil/trace.go:171","msg":"trace[1474565283] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1117; }","duration":"180.745878ms","start":"2024-08-12T12:02:58.544600Z","end":"2024-08-12T12:02:58.725346Z","steps":["trace[1474565283] 'range keys from in-memory index tree'  (duration: 180.358932ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T12:03:00.670670Z","caller":"traceutil/trace.go:171","msg":"trace[565930927] linearizableReadLoop","detail":"{readStateIndex:1301; appliedIndex:1300; }","duration":"127.919622ms","start":"2024-08-12T12:03:00.542720Z","end":"2024-08-12T12:03:00.670640Z","steps":["trace[565930927] 'read index received'  (duration: 127.719314ms)","trace[565930927] 'applied index is now lower than readState.Index'  (duration: 199.637µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-12T12:03:00.670825Z","caller":"traceutil/trace.go:171","msg":"trace[1142297371] transaction","detail":"{read_only:false; response_revision:1119; number_of_response:1; }","duration":"194.431905ms","start":"2024-08-12T12:03:00.476377Z","end":"2024-08-12T12:03:00.670809Z","steps":["trace[1142297371] 'process raft request'  (duration: 194.119917ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T12:03:00.671056Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.320415ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-12T12:03:00.671103Z","caller":"traceutil/trace.go:171","msg":"trace[709378376] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1119; }","duration":"128.37387ms","start":"2024-08-12T12:03:00.542716Z","end":"2024-08-12T12:03:00.671090Z","steps":["trace[709378376] 'agreement among raft nodes before linearized reading'  (duration: 128.301019ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T12:04:08.488236Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":929}
	{"level":"info","ts":"2024-08-12T12:04:08.492961Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":929,"took":"3.952047ms","hash":2776617634,"current-db-size-bytes":2129920,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1486848,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-08-12T12:04:08.493073Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2776617634,"revision":929,"compact-revision":686}
	
	
	==> kernel <==
	 12:04:57 up 21 min,  0 users,  load average: 0.68, 0.40, 0.28
	Linux no-preload-993542 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [31ef15b39cb8c8554c6cb881fe9a5f2563cdf48e576e32c65819689c66a68f1e] <==
	I0812 12:00:11.150052       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0812 12:00:11.150123       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0812 12:02:11.151139       1 handler_proxy.go:99] no RequestInfo found in the context
	E0812 12:02:11.151454       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0812 12:02:11.151654       1 handler_proxy.go:99] no RequestInfo found in the context
	E0812 12:02:11.151842       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0812 12:02:11.152639       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0812 12:02:11.153834       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0812 12:04:10.149886       1 handler_proxy.go:99] no RequestInfo found in the context
	E0812 12:04:10.150117       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0812 12:04:11.152023       1 handler_proxy.go:99] no RequestInfo found in the context
	W0812 12:04:11.152060       1 handler_proxy.go:99] no RequestInfo found in the context
	E0812 12:04:11.152172       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0812 12:04:11.152240       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0812 12:04:11.153419       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0812 12:04:11.153463       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [33cf9f4f3371c7fe0d9d3b2280aaa1489b3560e829814774f4fd82b42fbdde9e] <==
	W0812 11:49:00.062662       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.066192       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.137063       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.140615       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.155410       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.203070       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.219463       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.227262       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.238919       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.278983       1 logging.go:55] [core] [Channel #43 SubChannel #44]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.325014       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.344054       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.350662       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.414531       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.545172       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.552806       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.554141       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.610713       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.729365       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.776447       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.799810       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:00.902940       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:04.410968       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:04.521050       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:49:04.716304       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [b651d9db6daec2de76f9987eb56b439dd17620b7f9be407fd18ccea662ce8d19] <==
	E0812 11:59:47.162793       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0812 11:59:47.731538       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:00:17.170261       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0812 12:00:17.742379       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0812 12:00:28.837326       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="273.345µs"
	I0812 12:00:40.840280       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="83.894µs"
	E0812 12:00:47.177022       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0812 12:00:47.750969       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:01:17.183611       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0812 12:01:17.761772       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:01:47.190208       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0812 12:01:47.769363       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:02:17.199143       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0812 12:02:17.777712       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:02:47.206410       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0812 12:02:47.786228       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:03:17.213597       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0812 12:03:17.794801       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:03:47.221319       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0812 12:03:47.805934       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:04:17.229230       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0812 12:04:17.816505       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0812 12:04:39.541337       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-993542"
	E0812 12:04:47.235660       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0812 12:04:47.824025       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [2dec1828ea63c924f28aa60f3ed0f89bf784169b81864bc9db09734b2920ab69] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0812 11:49:18.562600       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0812 11:49:18.576053       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.148"]
	E0812 11:49:18.576237       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0812 11:49:18.644020       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0812 11:49:18.644059       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0812 11:49:18.644092       1 server_linux.go:169] "Using iptables Proxier"
	I0812 11:49:18.649304       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0812 11:49:18.649663       1 server.go:483] "Version info" version="v1.31.0-rc.0"
	I0812 11:49:18.649696       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 11:49:18.652665       1 config.go:197] "Starting service config controller"
	I0812 11:49:18.652714       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0812 11:49:18.652994       1 config.go:104] "Starting endpoint slice config controller"
	I0812 11:49:18.653050       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0812 11:49:18.653793       1 config.go:326] "Starting node config controller"
	I0812 11:49:18.653819       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0812 11:49:18.754855       1 shared_informer.go:320] Caches are synced for node config
	I0812 11:49:18.754864       1 shared_informer.go:320] Caches are synced for service config
	I0812 11:49:18.754879       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2cb29c2ee9470bab3324cbbc3c28453c35bc2a0ad8c0aca5a1a8119576954c94] <==
	W0812 11:49:11.076585       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0812 11:49:11.076679       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0812 11:49:11.200879       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0812 11:49:11.201097       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0812 11:49:11.252278       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0812 11:49:11.252874       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0812 11:49:11.264232       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0812 11:49:11.264302       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0812 11:49:11.286784       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0812 11:49:11.286969       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0812 11:49:11.395233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0812 11:49:11.395340       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0812 11:49:11.415170       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0812 11:49:11.415409       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0812 11:49:11.424998       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0812 11:49:11.425320       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0812 11:49:11.434155       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0812 11:49:11.434367       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0812 11:49:11.480989       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0812 11:49:11.481119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0812 11:49:11.520811       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0812 11:49:11.520955       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0812 11:49:11.734874       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0812 11:49:11.735009       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0812 11:49:14.850284       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 12 12:03:53 no-preload-993542 kubelet[3424]: E0812 12:03:53.074025    3424 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464233073582324,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 12 12:03:53 no-preload-993542 kubelet[3424]: E0812 12:03:53.074370    3424 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464233073582324,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 12 12:03:57 no-preload-993542 kubelet[3424]: E0812 12:03:57.817589    3424 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-25zg8" podUID="70d17780-d4bc-4df4-93ac-bb74c1fa50f3"
	Aug 12 12:04:03 no-preload-993542 kubelet[3424]: E0812 12:04:03.076506    3424 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464243076028726,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 12 12:04:03 no-preload-993542 kubelet[3424]: E0812 12:04:03.076564    3424 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464243076028726,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 12 12:04:09 no-preload-993542 kubelet[3424]: E0812 12:04:09.818393    3424 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-25zg8" podUID="70d17780-d4bc-4df4-93ac-bb74c1fa50f3"
	Aug 12 12:04:12 no-preload-993542 kubelet[3424]: E0812 12:04:12.834722    3424 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 12:04:12 no-preload-993542 kubelet[3424]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 12:04:12 no-preload-993542 kubelet[3424]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 12:04:12 no-preload-993542 kubelet[3424]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 12:04:12 no-preload-993542 kubelet[3424]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 12:04:13 no-preload-993542 kubelet[3424]: E0812 12:04:13.078530    3424 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464253078092717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 12 12:04:13 no-preload-993542 kubelet[3424]: E0812 12:04:13.078639    3424 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464253078092717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 12 12:04:22 no-preload-993542 kubelet[3424]: E0812 12:04:22.819526    3424 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-25zg8" podUID="70d17780-d4bc-4df4-93ac-bb74c1fa50f3"
	Aug 12 12:04:23 no-preload-993542 kubelet[3424]: E0812 12:04:23.080958    3424 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464263080055580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 12 12:04:23 no-preload-993542 kubelet[3424]: E0812 12:04:23.081090    3424 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464263080055580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 12 12:04:33 no-preload-993542 kubelet[3424]: E0812 12:04:33.083606    3424 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464273083107550,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 12 12:04:33 no-preload-993542 kubelet[3424]: E0812 12:04:33.084026    3424 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464273083107550,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 12 12:04:33 no-preload-993542 kubelet[3424]: E0812 12:04:33.817498    3424 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-25zg8" podUID="70d17780-d4bc-4df4-93ac-bb74c1fa50f3"
	Aug 12 12:04:43 no-preload-993542 kubelet[3424]: E0812 12:04:43.086263    3424 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464283085697581,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 12 12:04:43 no-preload-993542 kubelet[3424]: E0812 12:04:43.086545    3424 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464283085697581,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 12 12:04:45 no-preload-993542 kubelet[3424]: E0812 12:04:45.818624    3424 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-25zg8" podUID="70d17780-d4bc-4df4-93ac-bb74c1fa50f3"
	Aug 12 12:04:53 no-preload-993542 kubelet[3424]: E0812 12:04:53.088758    3424 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464293088285746,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 12 12:04:53 no-preload-993542 kubelet[3424]: E0812 12:04:53.089109    3424 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464293088285746,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 12 12:04:56 no-preload-993542 kubelet[3424]: E0812 12:04:56.821534    3424 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-25zg8" podUID="70d17780-d4bc-4df4-93ac-bb74c1fa50f3"
	
	
	==> storage-provisioner [22d4668d24d8d2bf08e095f533872a62efa233a54566b9fb66b48c9199254746] <==
	I0812 11:49:19.861795       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0812 11:49:19.887511       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0812 11:49:19.887584       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0812 11:49:19.905072       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0812 11:49:19.905873       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4a7fdbe9-19d1-4799-88b3-8c3f9b85e5b5", APIVersion:"v1", ResourceVersion:"388", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-993542_9e51be82-b188-4f69-8b4b-7025f601611d became leader
	I0812 11:49:19.905931       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-993542_9e51be82-b188-4f69-8b4b-7025f601611d!
	I0812 11:49:20.006073       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-993542_9e51be82-b188-4f69-8b4b-7025f601611d!
	

                                                
                                                
-- /stdout --
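Note on the recurring "Could not set up iptables canary" entries in the kubelet log above: the underlying ip6tables error ("Table does not exist (do you need to insmod?)") indicates the ip6table_nat kernel module is likely not loaded in the minikube guest, so kubelet cannot create its KUBE-KUBELET-CANARY chain for IPv6. A quick illustrative check against this profile (not part of the test harness) would be:

	minikube -p no-preload-993542 ssh "lsmod | grep ip6table_nat; sudo ip6tables -t nat -L -n"

This is unrelated to the metrics-server and dashboard failures examined below.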
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-993542 -n no-preload-993542
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-993542 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-25zg8
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-993542 describe pod metrics-server-6867b74b74-25zg8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-993542 describe pod metrics-server-6867b74b74-25zg8: exit status 1 (76.620971ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-25zg8" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-993542 describe pod metrics-server-6867b74b74-25zg8: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (385.95s)
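For this profile the kubelet log shows the metrics-server pod stuck in ImagePullBackOff for fake.domain/registry.k8s.io/echoserver:1.4. That image comes from the registry override the suite passes when enabling the addon (the Audit table below records the same --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain flags for other profiles), so the failed pull is expected; the failure itself comes from the dashboard addon check, the same AddonExistsAfterStop assertion shown for embed-certs below. A rough manual inspection of the same state, illustrative only and not part of the test harness, could be:

	kubectl --context no-preload-993542 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	kubectl --context no-preload-993542 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard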

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (358.92s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-093615 -n embed-certs-093615
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-12 12:04:42.445182887 +0000 UTC m=+6271.602105459
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-093615 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-093615 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.864µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-093615 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
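The assertion at start_stop_delete_test.go:291-297 describes deploy/dashboard-metrics-scraper in the kubernetes-dashboard namespace and expects its image to contain registry.k8s.io/echoserver:1.4, i.e. the value passed with --images=MetricsScraper=registry.k8s.io/echoserver:1.4 when the dashboard addon was enabled (see the Audit table below). Here the describe produced nothing because the 9m0s deadline had already expired. Assuming the deployment was actually created, an equivalent manual query, shown only as an illustration, would be:

	kubectl --context embed-certs-093615 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'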
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-093615 -n embed-certs-093615
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-093615 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-093615 logs -n 25: (1.128775058s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p embed-certs-093615                 | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 11:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-093615                                  | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 11:38 UTC | 12 Aug 24 11:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-835962                              | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 11:38 UTC | 12 Aug 24 11:39 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-835962             | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC | 12 Aug 24 11:39 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-835962                              | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-535697                           | kubernetes-upgrade-535697    | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC | 12 Aug 24 11:39 UTC |
	| start   | -p                                                     | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC | 12 Aug 24 11:44 UTC |
	|         | default-k8s-diff-port-581883                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-993542                  | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-993542                                   | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC | 12 Aug 24 11:49 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-581883  | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:44 UTC | 12 Aug 24 11:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:44 UTC |                     |
	|         | default-k8s-diff-port-581883                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-581883       | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:46 UTC | 12 Aug 24 11:57 UTC |
	|         | default-k8s-diff-port-581883                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-835962                              | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 12:02 UTC | 12 Aug 24 12:02 UTC |
	| start   | -p newest-cni-567702 --memory=2200 --alsologtostderr   | newest-cni-567702            | jenkins | v1.33.1 | 12 Aug 24 12:02 UTC | 12 Aug 24 12:03 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-567702             | newest-cni-567702            | jenkins | v1.33.1 | 12 Aug 24 12:03 UTC | 12 Aug 24 12:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-567702                                   | newest-cni-567702            | jenkins | v1.33.1 | 12 Aug 24 12:03 UTC | 12 Aug 24 12:03 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-567702                  | newest-cni-567702            | jenkins | v1.33.1 | 12 Aug 24 12:03 UTC | 12 Aug 24 12:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-567702 --memory=2200 --alsologtostderr   | newest-cni-567702            | jenkins | v1.33.1 | 12 Aug 24 12:03 UTC | 12 Aug 24 12:04 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| image   | newest-cni-567702 image list                           | newest-cni-567702            | jenkins | v1.33.1 | 12 Aug 24 12:04 UTC | 12 Aug 24 12:04 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-567702                                   | newest-cni-567702            | jenkins | v1.33.1 | 12 Aug 24 12:04 UTC | 12 Aug 24 12:04 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-567702                                   | newest-cni-567702            | jenkins | v1.33.1 | 12 Aug 24 12:04 UTC | 12 Aug 24 12:04 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-567702                                   | newest-cni-567702            | jenkins | v1.33.1 | 12 Aug 24 12:04 UTC | 12 Aug 24 12:04 UTC |
	| delete  | -p newest-cni-567702                                   | newest-cni-567702            | jenkins | v1.33.1 | 12 Aug 24 12:04 UTC | 12 Aug 24 12:04 UTC |
	| start   | -p auto-824402 --memory=3072                           | auto-824402                  | jenkins | v1.33.1 | 12 Aug 24 12:04 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 12:04:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 12:04:30.352467   65466 out.go:291] Setting OutFile to fd 1 ...
	I0812 12:04:30.352617   65466 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:04:30.352627   65466 out.go:304] Setting ErrFile to fd 2...
	I0812 12:04:30.352634   65466 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:04:30.352852   65466 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 12:04:30.353460   65466 out.go:298] Setting JSON to false
	I0812 12:04:30.354467   65466 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6411,"bootTime":1723457859,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 12:04:30.354525   65466 start.go:139] virtualization: kvm guest
	I0812 12:04:30.356854   65466 out.go:177] * [auto-824402] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 12:04:30.358323   65466 notify.go:220] Checking for updates...
	I0812 12:04:30.358367   65466 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 12:04:30.359768   65466 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 12:04:30.361048   65466 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 12:04:30.362231   65466 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 12:04:30.363773   65466 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 12:04:30.365148   65466 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 12:04:30.367031   65466 config.go:182] Loaded profile config "default-k8s-diff-port-581883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:04:30.367120   65466 config.go:182] Loaded profile config "embed-certs-093615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:04:30.367220   65466 config.go:182] Loaded profile config "no-preload-993542": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0812 12:04:30.367349   65466 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 12:04:30.406917   65466 out.go:177] * Using the kvm2 driver based on user configuration
	I0812 12:04:30.408429   65466 start.go:297] selected driver: kvm2
	I0812 12:04:30.408449   65466 start.go:901] validating driver "kvm2" against <nil>
	I0812 12:04:30.408462   65466 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 12:04:30.409259   65466 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 12:04:30.409352   65466 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19409-3774/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 12:04:30.425945   65466 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 12:04:30.426028   65466 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 12:04:30.426311   65466 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 12:04:30.426344   65466 cni.go:84] Creating CNI manager for ""
	I0812 12:04:30.426350   65466 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 12:04:30.426356   65466 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0812 12:04:30.426423   65466 start.go:340] cluster config:
	{Name:auto-824402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-824402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgent
PID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 12:04:30.426514   65466 iso.go:125] acquiring lock: {Name:mk817f4bb6a5031e68978029203802957205757f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 12:04:30.428489   65466 out.go:177] * Starting "auto-824402" primary control-plane node in "auto-824402" cluster
	I0812 12:04:30.429885   65466 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 12:04:30.429933   65466 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0812 12:04:30.429943   65466 cache.go:56] Caching tarball of preloaded images
	I0812 12:04:30.430022   65466 preload.go:172] Found /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 12:04:30.430032   65466 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 12:04:30.430119   65466 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/auto-824402/config.json ...
	I0812 12:04:30.430135   65466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/auto-824402/config.json: {Name:mkbbe856aa2b8b2fd58977bb8a71d622d6555cc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:04:30.430263   65466 start.go:360] acquireMachinesLock for auto-824402: {Name:mkf1f3ff7da562a087a1e344d13e67c3c8140973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 12:04:30.430290   65466 start.go:364] duration metric: took 14.52µs to acquireMachinesLock for "auto-824402"
	I0812 12:04:30.430306   65466 start.go:93] Provisioning new machine with config: &{Name:auto-824402 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.3 ClusterName:auto-824402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 12:04:30.430380   65466 start.go:125] createHost starting for "" (driver="kvm2")
	I0812 12:04:30.432278   65466 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0812 12:04:30.432436   65466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:04:30.432477   65466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:04:30.448406   65466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34901
	I0812 12:04:30.448906   65466 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:04:30.449531   65466 main.go:141] libmachine: Using API Version  1
	I0812 12:04:30.449556   65466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:04:30.449886   65466 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:04:30.450063   65466 main.go:141] libmachine: (auto-824402) Calling .GetMachineName
	I0812 12:04:30.450214   65466 main.go:141] libmachine: (auto-824402) Calling .DriverName
	I0812 12:04:30.450422   65466 start.go:159] libmachine.API.Create for "auto-824402" (driver="kvm2")
	I0812 12:04:30.450455   65466 client.go:168] LocalClient.Create starting
	I0812 12:04:30.450496   65466 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem
	I0812 12:04:30.450544   65466 main.go:141] libmachine: Decoding PEM data...
	I0812 12:04:30.450565   65466 main.go:141] libmachine: Parsing certificate...
	I0812 12:04:30.450611   65466 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem
	I0812 12:04:30.450631   65466 main.go:141] libmachine: Decoding PEM data...
	I0812 12:04:30.450642   65466 main.go:141] libmachine: Parsing certificate...
	I0812 12:04:30.450655   65466 main.go:141] libmachine: Running pre-create checks...
	I0812 12:04:30.450663   65466 main.go:141] libmachine: (auto-824402) Calling .PreCreateCheck
	I0812 12:04:30.451052   65466 main.go:141] libmachine: (auto-824402) Calling .GetConfigRaw
	I0812 12:04:30.451443   65466 main.go:141] libmachine: Creating machine...
	I0812 12:04:30.451456   65466 main.go:141] libmachine: (auto-824402) Calling .Create
	I0812 12:04:30.451598   65466 main.go:141] libmachine: (auto-824402) Creating KVM machine...
	I0812 12:04:30.453020   65466 main.go:141] libmachine: (auto-824402) DBG | found existing default KVM network
	I0812 12:04:30.454613   65466 main.go:141] libmachine: (auto-824402) DBG | I0812 12:04:30.454448   65489 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015950}
	I0812 12:04:30.454647   65466 main.go:141] libmachine: (auto-824402) DBG | created network xml: 
	I0812 12:04:30.454659   65466 main.go:141] libmachine: (auto-824402) DBG | <network>
	I0812 12:04:30.454667   65466 main.go:141] libmachine: (auto-824402) DBG |   <name>mk-auto-824402</name>
	I0812 12:04:30.454676   65466 main.go:141] libmachine: (auto-824402) DBG |   <dns enable='no'/>
	I0812 12:04:30.454686   65466 main.go:141] libmachine: (auto-824402) DBG |   
	I0812 12:04:30.454697   65466 main.go:141] libmachine: (auto-824402) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0812 12:04:30.454709   65466 main.go:141] libmachine: (auto-824402) DBG |     <dhcp>
	I0812 12:04:30.454720   65466 main.go:141] libmachine: (auto-824402) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0812 12:04:30.454729   65466 main.go:141] libmachine: (auto-824402) DBG |     </dhcp>
	I0812 12:04:30.454739   65466 main.go:141] libmachine: (auto-824402) DBG |   </ip>
	I0812 12:04:30.454747   65466 main.go:141] libmachine: (auto-824402) DBG |   
	I0812 12:04:30.454755   65466 main.go:141] libmachine: (auto-824402) DBG | </network>
	I0812 12:04:30.454762   65466 main.go:141] libmachine: (auto-824402) DBG | 
	I0812 12:04:30.460563   65466 main.go:141] libmachine: (auto-824402) DBG | trying to create private KVM network mk-auto-824402 192.168.39.0/24...
	I0812 12:04:30.538504   65466 main.go:141] libmachine: (auto-824402) DBG | private KVM network mk-auto-824402 192.168.39.0/24 created
	I0812 12:04:30.538545   65466 main.go:141] libmachine: (auto-824402) Setting up store path in /home/jenkins/minikube-integration/19409-3774/.minikube/machines/auto-824402 ...
	I0812 12:04:30.538557   65466 main.go:141] libmachine: (auto-824402) DBG | I0812 12:04:30.538430   65489 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 12:04:30.538578   65466 main.go:141] libmachine: (auto-824402) Building disk image from file:///home/jenkins/minikube-integration/19409-3774/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0812 12:04:30.538600   65466 main.go:141] libmachine: (auto-824402) Downloading /home/jenkins/minikube-integration/19409-3774/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19409-3774/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0812 12:04:30.785247   65466 main.go:141] libmachine: (auto-824402) DBG | I0812 12:04:30.785121   65489 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/auto-824402/id_rsa...
	I0812 12:04:31.010885   65466 main.go:141] libmachine: (auto-824402) DBG | I0812 12:04:31.010749   65489 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/auto-824402/auto-824402.rawdisk...
	I0812 12:04:31.010911   65466 main.go:141] libmachine: (auto-824402) DBG | Writing magic tar header
	I0812 12:04:31.010947   65466 main.go:141] libmachine: (auto-824402) DBG | Writing SSH key tar header
	I0812 12:04:31.010960   65466 main.go:141] libmachine: (auto-824402) DBG | I0812 12:04:31.010868   65489 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19409-3774/.minikube/machines/auto-824402 ...
	I0812 12:04:31.010974   65466 main.go:141] libmachine: (auto-824402) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/auto-824402
	I0812 12:04:31.010985   65466 main.go:141] libmachine: (auto-824402) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube/machines
	I0812 12:04:31.011003   65466 main.go:141] libmachine: (auto-824402) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 12:04:31.011012   65466 main.go:141] libmachine: (auto-824402) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774
	I0812 12:04:31.011024   65466 main.go:141] libmachine: (auto-824402) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0812 12:04:31.011041   65466 main.go:141] libmachine: (auto-824402) DBG | Checking permissions on dir: /home/jenkins
	I0812 12:04:31.011055   65466 main.go:141] libmachine: (auto-824402) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube/machines/auto-824402 (perms=drwx------)
	I0812 12:04:31.011067   65466 main.go:141] libmachine: (auto-824402) DBG | Checking permissions on dir: /home
	I0812 12:04:31.011076   65466 main.go:141] libmachine: (auto-824402) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube/machines (perms=drwxr-xr-x)
	I0812 12:04:31.011089   65466 main.go:141] libmachine: (auto-824402) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube (perms=drwxr-xr-x)
	I0812 12:04:31.011096   65466 main.go:141] libmachine: (auto-824402) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774 (perms=drwxrwxr-x)
	I0812 12:04:31.011108   65466 main.go:141] libmachine: (auto-824402) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0812 12:04:31.011114   65466 main.go:141] libmachine: (auto-824402) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0812 12:04:31.011124   65466 main.go:141] libmachine: (auto-824402) Creating domain...
	I0812 12:04:31.011136   65466 main.go:141] libmachine: (auto-824402) DBG | Skipping /home - not owner
	I0812 12:04:31.012370   65466 main.go:141] libmachine: (auto-824402) define libvirt domain using xml: 
	I0812 12:04:31.012396   65466 main.go:141] libmachine: (auto-824402) <domain type='kvm'>
	I0812 12:04:31.012403   65466 main.go:141] libmachine: (auto-824402)   <name>auto-824402</name>
	I0812 12:04:31.012411   65466 main.go:141] libmachine: (auto-824402)   <memory unit='MiB'>3072</memory>
	I0812 12:04:31.012417   65466 main.go:141] libmachine: (auto-824402)   <vcpu>2</vcpu>
	I0812 12:04:31.012427   65466 main.go:141] libmachine: (auto-824402)   <features>
	I0812 12:04:31.012435   65466 main.go:141] libmachine: (auto-824402)     <acpi/>
	I0812 12:04:31.012442   65466 main.go:141] libmachine: (auto-824402)     <apic/>
	I0812 12:04:31.012450   65466 main.go:141] libmachine: (auto-824402)     <pae/>
	I0812 12:04:31.012471   65466 main.go:141] libmachine: (auto-824402)     
	I0812 12:04:31.012481   65466 main.go:141] libmachine: (auto-824402)   </features>
	I0812 12:04:31.012485   65466 main.go:141] libmachine: (auto-824402)   <cpu mode='host-passthrough'>
	I0812 12:04:31.012490   65466 main.go:141] libmachine: (auto-824402)   
	I0812 12:04:31.012498   65466 main.go:141] libmachine: (auto-824402)   </cpu>
	I0812 12:04:31.012529   65466 main.go:141] libmachine: (auto-824402)   <os>
	I0812 12:04:31.012553   65466 main.go:141] libmachine: (auto-824402)     <type>hvm</type>
	I0812 12:04:31.012564   65466 main.go:141] libmachine: (auto-824402)     <boot dev='cdrom'/>
	I0812 12:04:31.012573   65466 main.go:141] libmachine: (auto-824402)     <boot dev='hd'/>
	I0812 12:04:31.012586   65466 main.go:141] libmachine: (auto-824402)     <bootmenu enable='no'/>
	I0812 12:04:31.012599   65466 main.go:141] libmachine: (auto-824402)   </os>
	I0812 12:04:31.012620   65466 main.go:141] libmachine: (auto-824402)   <devices>
	I0812 12:04:31.012639   65466 main.go:141] libmachine: (auto-824402)     <disk type='file' device='cdrom'>
	I0812 12:04:31.012656   65466 main.go:141] libmachine: (auto-824402)       <source file='/home/jenkins/minikube-integration/19409-3774/.minikube/machines/auto-824402/boot2docker.iso'/>
	I0812 12:04:31.012666   65466 main.go:141] libmachine: (auto-824402)       <target dev='hdc' bus='scsi'/>
	I0812 12:04:31.012673   65466 main.go:141] libmachine: (auto-824402)       <readonly/>
	I0812 12:04:31.012680   65466 main.go:141] libmachine: (auto-824402)     </disk>
	I0812 12:04:31.012686   65466 main.go:141] libmachine: (auto-824402)     <disk type='file' device='disk'>
	I0812 12:04:31.012694   65466 main.go:141] libmachine: (auto-824402)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0812 12:04:31.012702   65466 main.go:141] libmachine: (auto-824402)       <source file='/home/jenkins/minikube-integration/19409-3774/.minikube/machines/auto-824402/auto-824402.rawdisk'/>
	I0812 12:04:31.012709   65466 main.go:141] libmachine: (auto-824402)       <target dev='hda' bus='virtio'/>
	I0812 12:04:31.012714   65466 main.go:141] libmachine: (auto-824402)     </disk>
	I0812 12:04:31.012721   65466 main.go:141] libmachine: (auto-824402)     <interface type='network'>
	I0812 12:04:31.012728   65466 main.go:141] libmachine: (auto-824402)       <source network='mk-auto-824402'/>
	I0812 12:04:31.012734   65466 main.go:141] libmachine: (auto-824402)       <model type='virtio'/>
	I0812 12:04:31.012739   65466 main.go:141] libmachine: (auto-824402)     </interface>
	I0812 12:04:31.012746   65466 main.go:141] libmachine: (auto-824402)     <interface type='network'>
	I0812 12:04:31.012752   65466 main.go:141] libmachine: (auto-824402)       <source network='default'/>
	I0812 12:04:31.012767   65466 main.go:141] libmachine: (auto-824402)       <model type='virtio'/>
	I0812 12:04:31.012772   65466 main.go:141] libmachine: (auto-824402)     </interface>
	I0812 12:04:31.012781   65466 main.go:141] libmachine: (auto-824402)     <serial type='pty'>
	I0812 12:04:31.012795   65466 main.go:141] libmachine: (auto-824402)       <target port='0'/>
	I0812 12:04:31.012807   65466 main.go:141] libmachine: (auto-824402)     </serial>
	I0812 12:04:31.012825   65466 main.go:141] libmachine: (auto-824402)     <console type='pty'>
	I0812 12:04:31.012833   65466 main.go:141] libmachine: (auto-824402)       <target type='serial' port='0'/>
	I0812 12:04:31.012850   65466 main.go:141] libmachine: (auto-824402)     </console>
	I0812 12:04:31.012861   65466 main.go:141] libmachine: (auto-824402)     <rng model='virtio'>
	I0812 12:04:31.012883   65466 main.go:141] libmachine: (auto-824402)       <backend model='random'>/dev/random</backend>
	I0812 12:04:31.012897   65466 main.go:141] libmachine: (auto-824402)     </rng>
	I0812 12:04:31.012905   65466 main.go:141] libmachine: (auto-824402)     
	I0812 12:04:31.012917   65466 main.go:141] libmachine: (auto-824402)     
	I0812 12:04:31.012926   65466 main.go:141] libmachine: (auto-824402)   </devices>
	I0812 12:04:31.012936   65466 main.go:141] libmachine: (auto-824402) </domain>
	I0812 12:04:31.012993   65466 main.go:141] libmachine: (auto-824402) 
	I0812 12:04:31.017549   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:ae:83:cf in network default
	I0812 12:04:31.018139   65466 main.go:141] libmachine: (auto-824402) Ensuring networks are active...
	I0812 12:04:31.018189   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:31.019035   65466 main.go:141] libmachine: (auto-824402) Ensuring network default is active
	I0812 12:04:31.019360   65466 main.go:141] libmachine: (auto-824402) Ensuring network mk-auto-824402 is active
	I0812 12:04:31.019931   65466 main.go:141] libmachine: (auto-824402) Getting domain xml...
	I0812 12:04:31.020651   65466 main.go:141] libmachine: (auto-824402) Creating domain...
	I0812 12:04:32.312500   65466 main.go:141] libmachine: (auto-824402) Waiting to get IP...
	I0812 12:04:32.313300   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:32.313768   65466 main.go:141] libmachine: (auto-824402) DBG | unable to find current IP address of domain auto-824402 in network mk-auto-824402
	I0812 12:04:32.313790   65466 main.go:141] libmachine: (auto-824402) DBG | I0812 12:04:32.313745   65489 retry.go:31] will retry after 226.604441ms: waiting for machine to come up
	I0812 12:04:32.542250   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:32.542854   65466 main.go:141] libmachine: (auto-824402) DBG | unable to find current IP address of domain auto-824402 in network mk-auto-824402
	I0812 12:04:32.542877   65466 main.go:141] libmachine: (auto-824402) DBG | I0812 12:04:32.542815   65489 retry.go:31] will retry after 245.428386ms: waiting for machine to come up
	I0812 12:04:32.790307   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:32.790768   65466 main.go:141] libmachine: (auto-824402) DBG | unable to find current IP address of domain auto-824402 in network mk-auto-824402
	I0812 12:04:32.790791   65466 main.go:141] libmachine: (auto-824402) DBG | I0812 12:04:32.790716   65489 retry.go:31] will retry after 341.698543ms: waiting for machine to come up
	I0812 12:04:33.134409   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:33.134943   65466 main.go:141] libmachine: (auto-824402) DBG | unable to find current IP address of domain auto-824402 in network mk-auto-824402
	I0812 12:04:33.134975   65466 main.go:141] libmachine: (auto-824402) DBG | I0812 12:04:33.134915   65489 retry.go:31] will retry after 393.427629ms: waiting for machine to come up
	I0812 12:04:33.529556   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:33.530087   65466 main.go:141] libmachine: (auto-824402) DBG | unable to find current IP address of domain auto-824402 in network mk-auto-824402
	I0812 12:04:33.530114   65466 main.go:141] libmachine: (auto-824402) DBG | I0812 12:04:33.530019   65489 retry.go:31] will retry after 555.879157ms: waiting for machine to come up
	I0812 12:04:34.087754   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:34.088513   65466 main.go:141] libmachine: (auto-824402) DBG | unable to find current IP address of domain auto-824402 in network mk-auto-824402
	I0812 12:04:34.088546   65466 main.go:141] libmachine: (auto-824402) DBG | I0812 12:04:34.088452   65489 retry.go:31] will retry after 717.916542ms: waiting for machine to come up
	I0812 12:04:34.808535   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:34.809222   65466 main.go:141] libmachine: (auto-824402) DBG | unable to find current IP address of domain auto-824402 in network mk-auto-824402
	I0812 12:04:34.809268   65466 main.go:141] libmachine: (auto-824402) DBG | I0812 12:04:34.809174   65489 retry.go:31] will retry after 1.074144739s: waiting for machine to come up
	I0812 12:04:35.884565   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:35.885124   65466 main.go:141] libmachine: (auto-824402) DBG | unable to find current IP address of domain auto-824402 in network mk-auto-824402
	I0812 12:04:35.885158   65466 main.go:141] libmachine: (auto-824402) DBG | I0812 12:04:35.885079   65489 retry.go:31] will retry after 981.130895ms: waiting for machine to come up
	I0812 12:04:36.868182   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:36.868655   65466 main.go:141] libmachine: (auto-824402) DBG | unable to find current IP address of domain auto-824402 in network mk-auto-824402
	I0812 12:04:36.868685   65466 main.go:141] libmachine: (auto-824402) DBG | I0812 12:04:36.868595   65489 retry.go:31] will retry after 1.78057453s: waiting for machine to come up
	I0812 12:04:38.650893   65466 main.go:141] libmachine: (auto-824402) DBG | domain auto-824402 has defined MAC address 52:54:00:a8:95:4f in network mk-auto-824402
	I0812 12:04:38.651456   65466 main.go:141] libmachine: (auto-824402) DBG | unable to find current IP address of domain auto-824402 in network mk-auto-824402
	I0812 12:04:38.651488   65466 main.go:141] libmachine: (auto-824402) DBG | I0812 12:04:38.651398   65489 retry.go:31] will retry after 2.294225107s: waiting for machine to come up
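The retry lines above come from libmachine waiting for the freshly created auto-824402 VM to obtain a DHCP lease: each attempt that finds no IP schedules another attempt after a delay that grows from roughly 200ms toward a few seconds. A minimal Go sketch of that poll-with-growing-backoff pattern follows; waitForIP, lookupIP and the timing constants are illustrative stand-ins, not minikube's actual retry.go code.

    // Hypothetical sketch (not minikube's retry.go): poll for a VM's IP with a
    // growing, jittered delay, giving up after a deadline.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP stands in for the real libvirt DHCP-lease query; it is a placeholder.
    func lookupIP() (string, error) {
    	return "", errors.New("no lease yet")
    }

    func waitForIP(timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(); err == nil {
    			return ip, nil
    		}
    		// Add jitter and grow the delay, mirroring the increasing
    		// "will retry after ..." intervals seen in the log above.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		delay = delay * 3 / 2
    	}
    	return "", fmt.Errorf("machine did not report an IP within %v", timeout)
    }

    func main() {
    	if ip, err := waitForIP(5 * time.Second); err != nil {
    		fmt.Println("error:", err)
    	} else {
    		fmt.Println("IP:", ip)
    	}
    }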
	
	
	==> CRI-O <==
	Aug 12 12:04:43 embed-certs-093615 crio[725]: time="2024-08-12 12:04:43.010959327Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464283010932518,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=215691a0-b8cc-4e45-9690-528df7b7f014 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:04:43 embed-certs-093615 crio[725]: time="2024-08-12 12:04:43.011560980Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9f9a58a8-68f8-44a1-9366-d99b0f85c3f1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:04:43 embed-certs-093615 crio[725]: time="2024-08-12 12:04:43.011616108Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9f9a58a8-68f8-44a1-9366-d99b0f85c3f1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:04:43 embed-certs-093615 crio[725]: time="2024-08-12 12:04:43.011992793Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ddd1e160b6318f08b006f93ac9bdd5283d33cdafa0156a2827ab62323b0ed011,PodSandboxId:3e24d404dc9fd67e7dc0075d8a44221509cc6bc7aaee318e92ea25893a2107ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463379827269460,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cjbwn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec8ff679-9b23-481d-b8c5-207b54e7e5ea,},Annotations:map[string]string{io.kubernetes.container.hash: 519a27d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4d2283db264218f130f764a3ab1c27d647657ab590b20d813df063c9f8f2c89,PodSandboxId:8e58e817dfe1e5cdc5e13a376cfecd1aeb54b5814acde5cd157ba435ca8019fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463379769400818,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zcpcc,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ed76b19c-cd96-4754-ae07-08a2a0b91387,},Annotations:map[string]string{io.kubernetes.container.hash: 6c68a0ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b995fc3e6be3942acbde64819cc76f96f3521923b35c9ae8fbec13f40206e98,PodSandboxId:c0db6336dcd60921546f5a41061dbf93a850639b46e902d2dd7ea25c4c70ef95,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1723463379082406043,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c29d9422-fc62-4536-974b-70ba940152c2,},Annotations:map[string]string{io.kubernetes.container.hash: 8fe9edba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:116de1fd0f81fcc9a61ddacd12b81674c9a887197a3aebaa4ae3a6ddfc637779,PodSandboxId:b15dac4a46926cd9bad0c1ea2ccfd9427583a535d0968f8e3dc84266d3fa9f08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1723463378095761475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26xvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cacdea2f-2ce2-43ab-8e3e-104a7a40d027,},Annotations:map[string]string{io.kubernetes.container.hash: 7a63889f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c50823884ae41af6cbe94544af5706985546f1b0e41dc59574bb16dfcb71d9c,PodSandboxId:3f91dcb6e01091555ec8783d6bab2461b58a5cc6a9f757533e791eaaad8a7172,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723463358309576920,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-093615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70c0b8f401b3620d72c88cbd19916771,},Annotations:map[string]string{io.kubernetes.container.hash: 5e923daa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3de12c4bbaca1a20ac2b011874af396a6391b160b46e59d40a394ec25cf9516f,PodSandboxId:971dd05803062f4bc3cc06f9e54759d8c764ba84b9b346b7e5b9721c9d699fa2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723463358257992078,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-093615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb6d8a130ae502a7aa2808cecf135d4e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81a99ad0a2faa742909ec94c2078f7075a9986f0655c9d860d3e4b92c5b1223a,PodSandboxId:d480a7755d15143c6279e01df8d4086d31f85406469fc39726964d71abbcdf6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723463358289118814,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-093615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2b6ca60428c5e7af527adc730f5d01,},Annotations:map[string]string{io.kubernetes.container.hash: 95d470e3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c719e81534f0ece7830b9712a865b739f53d90fc6379062adb5ffc60065dd36e,PodSandboxId:3c0c4462fd4eb5b3c67c2f21f5ffb934784a27cad4df0093aa9797218e95b9af,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723463358213167912,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-093615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4228075c00a9a0feb75301a73092757d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9f9a58a8-68f8-44a1-9366-d99b0f85c3f1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:04:43 embed-certs-093615 crio[725]: time="2024-08-12 12:04:43.047795998Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b13815ef-23a9-466a-8ad7-ba43197d5bbf name=/runtime.v1.RuntimeService/Version
	Aug 12 12:04:43 embed-certs-093615 crio[725]: time="2024-08-12 12:04:43.047884992Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b13815ef-23a9-466a-8ad7-ba43197d5bbf name=/runtime.v1.RuntimeService/Version
	Aug 12 12:04:43 embed-certs-093615 crio[725]: time="2024-08-12 12:04:43.049252491Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f11cac27-d91f-48c7-9326-a8b925a227f5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:04:43 embed-certs-093615 crio[725]: time="2024-08-12 12:04:43.049909501Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464283049881483,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f11cac27-d91f-48c7-9326-a8b925a227f5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:04:43 embed-certs-093615 crio[725]: time="2024-08-12 12:04:43.050633934Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca8d8c8f-324e-4425-8ef5-9951542a5d77 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:04:43 embed-certs-093615 crio[725]: time="2024-08-12 12:04:43.050684102Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca8d8c8f-324e-4425-8ef5-9951542a5d77 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:04:43 embed-certs-093615 crio[725]: time="2024-08-12 12:04:43.050870325Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ddd1e160b6318f08b006f93ac9bdd5283d33cdafa0156a2827ab62323b0ed011,PodSandboxId:3e24d404dc9fd67e7dc0075d8a44221509cc6bc7aaee318e92ea25893a2107ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463379827269460,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cjbwn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec8ff679-9b23-481d-b8c5-207b54e7e5ea,},Annotations:map[string]string{io.kubernetes.container.hash: 519a27d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4d2283db264218f130f764a3ab1c27d647657ab590b20d813df063c9f8f2c89,PodSandboxId:8e58e817dfe1e5cdc5e13a376cfecd1aeb54b5814acde5cd157ba435ca8019fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463379769400818,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zcpcc,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ed76b19c-cd96-4754-ae07-08a2a0b91387,},Annotations:map[string]string{io.kubernetes.container.hash: 6c68a0ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b995fc3e6be3942acbde64819cc76f96f3521923b35c9ae8fbec13f40206e98,PodSandboxId:c0db6336dcd60921546f5a41061dbf93a850639b46e902d2dd7ea25c4c70ef95,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1723463379082406043,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c29d9422-fc62-4536-974b-70ba940152c2,},Annotations:map[string]string{io.kubernetes.container.hash: 8fe9edba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:116de1fd0f81fcc9a61ddacd12b81674c9a887197a3aebaa4ae3a6ddfc637779,PodSandboxId:b15dac4a46926cd9bad0c1ea2ccfd9427583a535d0968f8e3dc84266d3fa9f08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1723463378095761475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26xvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cacdea2f-2ce2-43ab-8e3e-104a7a40d027,},Annotations:map[string]string{io.kubernetes.container.hash: 7a63889f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c50823884ae41af6cbe94544af5706985546f1b0e41dc59574bb16dfcb71d9c,PodSandboxId:3f91dcb6e01091555ec8783d6bab2461b58a5cc6a9f757533e791eaaad8a7172,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723463358309576920,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-093615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70c0b8f401b3620d72c88cbd19916771,},Annotations:map[string]string{io.kubernetes.container.hash: 5e923daa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3de12c4bbaca1a20ac2b011874af396a6391b160b46e59d40a394ec25cf9516f,PodSandboxId:971dd05803062f4bc3cc06f9e54759d8c764ba84b9b346b7e5b9721c9d699fa2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723463358257992078,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-093615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb6d8a130ae502a7aa2808cecf135d4e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81a99ad0a2faa742909ec94c2078f7075a9986f0655c9d860d3e4b92c5b1223a,PodSandboxId:d480a7755d15143c6279e01df8d4086d31f85406469fc39726964d71abbcdf6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723463358289118814,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-093615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2b6ca60428c5e7af527adc730f5d01,},Annotations:map[string]string{io.kubernetes.container.hash: 95d470e3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c719e81534f0ece7830b9712a865b739f53d90fc6379062adb5ffc60065dd36e,PodSandboxId:3c0c4462fd4eb5b3c67c2f21f5ffb934784a27cad4df0093aa9797218e95b9af,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723463358213167912,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-093615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4228075c00a9a0feb75301a73092757d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ca8d8c8f-324e-4425-8ef5-9951542a5d77 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:04:43 embed-certs-093615 crio[725]: time="2024-08-12 12:04:43.088215745Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9de103f8-b57b-42eb-8fa2-d70966b98b88 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:04:43 embed-certs-093615 crio[725]: time="2024-08-12 12:04:43.088480967Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9de103f8-b57b-42eb-8fa2-d70966b98b88 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:04:43 embed-certs-093615 crio[725]: time="2024-08-12 12:04:43.089787842Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=38843663-cfed-4a26-9cca-c233493abec1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:04:43 embed-certs-093615 crio[725]: time="2024-08-12 12:04:43.090293866Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464283090263021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=38843663-cfed-4a26-9cca-c233493abec1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:04:43 embed-certs-093615 crio[725]: time="2024-08-12 12:04:43.090860182Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5605550a-7d6d-486d-a0bc-d75fb8878dfd name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:04:43 embed-certs-093615 crio[725]: time="2024-08-12 12:04:43.090926771Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5605550a-7d6d-486d-a0bc-d75fb8878dfd name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:04:43 embed-certs-093615 crio[725]: time="2024-08-12 12:04:43.091219376Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ddd1e160b6318f08b006f93ac9bdd5283d33cdafa0156a2827ab62323b0ed011,PodSandboxId:3e24d404dc9fd67e7dc0075d8a44221509cc6bc7aaee318e92ea25893a2107ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463379827269460,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cjbwn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec8ff679-9b23-481d-b8c5-207b54e7e5ea,},Annotations:map[string]string{io.kubernetes.container.hash: 519a27d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4d2283db264218f130f764a3ab1c27d647657ab590b20d813df063c9f8f2c89,PodSandboxId:8e58e817dfe1e5cdc5e13a376cfecd1aeb54b5814acde5cd157ba435ca8019fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463379769400818,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zcpcc,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ed76b19c-cd96-4754-ae07-08a2a0b91387,},Annotations:map[string]string{io.kubernetes.container.hash: 6c68a0ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b995fc3e6be3942acbde64819cc76f96f3521923b35c9ae8fbec13f40206e98,PodSandboxId:c0db6336dcd60921546f5a41061dbf93a850639b46e902d2dd7ea25c4c70ef95,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1723463379082406043,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c29d9422-fc62-4536-974b-70ba940152c2,},Annotations:map[string]string{io.kubernetes.container.hash: 8fe9edba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:116de1fd0f81fcc9a61ddacd12b81674c9a887197a3aebaa4ae3a6ddfc637779,PodSandboxId:b15dac4a46926cd9bad0c1ea2ccfd9427583a535d0968f8e3dc84266d3fa9f08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1723463378095761475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26xvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cacdea2f-2ce2-43ab-8e3e-104a7a40d027,},Annotations:map[string]string{io.kubernetes.container.hash: 7a63889f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c50823884ae41af6cbe94544af5706985546f1b0e41dc59574bb16dfcb71d9c,PodSandboxId:3f91dcb6e01091555ec8783d6bab2461b58a5cc6a9f757533e791eaaad8a7172,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723463358309576920,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-093615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70c0b8f401b3620d72c88cbd19916771,},Annotations:map[string]string{io.kubernetes.container.hash: 5e923daa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3de12c4bbaca1a20ac2b011874af396a6391b160b46e59d40a394ec25cf9516f,PodSandboxId:971dd05803062f4bc3cc06f9e54759d8c764ba84b9b346b7e5b9721c9d699fa2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723463358257992078,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-093615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb6d8a130ae502a7aa2808cecf135d4e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81a99ad0a2faa742909ec94c2078f7075a9986f0655c9d860d3e4b92c5b1223a,PodSandboxId:d480a7755d15143c6279e01df8d4086d31f85406469fc39726964d71abbcdf6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723463358289118814,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-093615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2b6ca60428c5e7af527adc730f5d01,},Annotations:map[string]string{io.kubernetes.container.hash: 95d470e3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c719e81534f0ece7830b9712a865b739f53d90fc6379062adb5ffc60065dd36e,PodSandboxId:3c0c4462fd4eb5b3c67c2f21f5ffb934784a27cad4df0093aa9797218e95b9af,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723463358213167912,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-093615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4228075c00a9a0feb75301a73092757d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5605550a-7d6d-486d-a0bc-d75fb8878dfd name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:04:43 embed-certs-093615 crio[725]: time="2024-08-12 12:04:43.124515968Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0fe3d10c-3246-4839-9a52-9726a79920b5 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:04:43 embed-certs-093615 crio[725]: time="2024-08-12 12:04:43.124597706Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0fe3d10c-3246-4839-9a52-9726a79920b5 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:04:43 embed-certs-093615 crio[725]: time="2024-08-12 12:04:43.125926274Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c6decac0-0a9c-4ccf-82ae-bebb203730e7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:04:43 embed-certs-093615 crio[725]: time="2024-08-12 12:04:43.126481033Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464283126452937,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c6decac0-0a9c-4ccf-82ae-bebb203730e7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:04:43 embed-certs-093615 crio[725]: time="2024-08-12 12:04:43.127080809Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae0a507a-d522-4995-bcb1-b80b7bc20b87 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:04:43 embed-certs-093615 crio[725]: time="2024-08-12 12:04:43.127176686Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae0a507a-d522-4995-bcb1-b80b7bc20b87 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:04:43 embed-certs-093615 crio[725]: time="2024-08-12 12:04:43.127369488Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ddd1e160b6318f08b006f93ac9bdd5283d33cdafa0156a2827ab62323b0ed011,PodSandboxId:3e24d404dc9fd67e7dc0075d8a44221509cc6bc7aaee318e92ea25893a2107ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463379827269460,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cjbwn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec8ff679-9b23-481d-b8c5-207b54e7e5ea,},Annotations:map[string]string{io.kubernetes.container.hash: 519a27d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4d2283db264218f130f764a3ab1c27d647657ab590b20d813df063c9f8f2c89,PodSandboxId:8e58e817dfe1e5cdc5e13a376cfecd1aeb54b5814acde5cd157ba435ca8019fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463379769400818,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zcpcc,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ed76b19c-cd96-4754-ae07-08a2a0b91387,},Annotations:map[string]string{io.kubernetes.container.hash: 6c68a0ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b995fc3e6be3942acbde64819cc76f96f3521923b35c9ae8fbec13f40206e98,PodSandboxId:c0db6336dcd60921546f5a41061dbf93a850639b46e902d2dd7ea25c4c70ef95,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1723463379082406043,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c29d9422-fc62-4536-974b-70ba940152c2,},Annotations:map[string]string{io.kubernetes.container.hash: 8fe9edba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:116de1fd0f81fcc9a61ddacd12b81674c9a887197a3aebaa4ae3a6ddfc637779,PodSandboxId:b15dac4a46926cd9bad0c1ea2ccfd9427583a535d0968f8e3dc84266d3fa9f08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1723463378095761475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26xvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cacdea2f-2ce2-43ab-8e3e-104a7a40d027,},Annotations:map[string]string{io.kubernetes.container.hash: 7a63889f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c50823884ae41af6cbe94544af5706985546f1b0e41dc59574bb16dfcb71d9c,PodSandboxId:3f91dcb6e01091555ec8783d6bab2461b58a5cc6a9f757533e791eaaad8a7172,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723463358309576920,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-093615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70c0b8f401b3620d72c88cbd19916771,},Annotations:map[string]string{io.kubernetes.container.hash: 5e923daa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3de12c4bbaca1a20ac2b011874af396a6391b160b46e59d40a394ec25cf9516f,PodSandboxId:971dd05803062f4bc3cc06f9e54759d8c764ba84b9b346b7e5b9721c9d699fa2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723463358257992078,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-093615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb6d8a130ae502a7aa2808cecf135d4e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81a99ad0a2faa742909ec94c2078f7075a9986f0655c9d860d3e4b92c5b1223a,PodSandboxId:d480a7755d15143c6279e01df8d4086d31f85406469fc39726964d71abbcdf6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723463358289118814,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-093615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2b6ca60428c5e7af527adc730f5d01,},Annotations:map[string]string{io.kubernetes.container.hash: 95d470e3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c719e81534f0ece7830b9712a865b739f53d90fc6379062adb5ffc60065dd36e,PodSandboxId:3c0c4462fd4eb5b3c67c2f21f5ffb934784a27cad4df0093aa9797218e95b9af,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723463358213167912,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-093615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4228075c00a9a0feb75301a73092757d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae0a507a-d522-4995-bcb1-b80b7bc20b87 name=/runtime.v1.RuntimeService/ListContainers
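The CRI-O entries in this section are the server side of routine CRI polling: a client repeatedly issues Version, ImageFsInfo and ListContainers RPCs against /var/run/crio/crio.sock, and CRI-O logs each request/response pair. As a rough illustration only, assuming the standard k8s.io/cri-api v1 client and the default CRI-O socket path, the Version and unfiltered ListContainers calls could be issued as in the sketch below; this is not minikube's or the kubelet's own code.

    // Hypothetical sketch: issue the Version and ListContainers CRI calls that
    // produce request/response pairs like those logged by CRI-O above.
    // Assumes k8s.io/cri-api and google.golang.org/grpc are available and that
    // CRI-O is listening on its default unix socket.
    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	client := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// Same RPC as the "/runtime.v1.RuntimeService/Version" entries above.
    	ver, err := client.Version(ctx, &runtimeapi.VersionRequest{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(ver.RuntimeName, ver.RuntimeVersion)

    	// Same RPC as "/runtime.v1.RuntimeService/ListContainers" with no filter.
    	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, c := range resp.Containers {
    		fmt.Println(c.Metadata.Name, c.State)
    	}
    }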
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ddd1e160b6318       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   3e24d404dc9fd       coredns-7db6d8ff4d-cjbwn
	d4d2283db2642       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   8e58e817dfe1e       coredns-7db6d8ff4d-zcpcc
	9b995fc3e6be3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   c0db6336dcd60       storage-provisioner
	116de1fd0f81f       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   15 minutes ago      Running             kube-proxy                0                   b15dac4a46926       kube-proxy-26xvl
	5c50823884ae4       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   15 minutes ago      Running             etcd                      2                   3f91dcb6e0109       etcd-embed-certs-093615
	81a99ad0a2faa       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   15 minutes ago      Running             kube-apiserver            2                   d480a7755d151       kube-apiserver-embed-certs-093615
	3de12c4bbaca1       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   15 minutes ago      Running             kube-scheduler            2                   971dd05803062       kube-scheduler-embed-certs-093615
	c719e81534f0e       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   15 minutes ago      Running             kube-controller-manager   2                   3c0c4462fd4eb       kube-controller-manager-embed-certs-093615
	
	
	==> coredns [d4d2283db264218f130f764a3ab1c27d647657ab590b20d813df063c9f8f2c89] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ddd1e160b6318f08b006f93ac9bdd5283d33cdafa0156a2827ab62323b0ed011] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-093615
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-093615
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
	                    minikube.k8s.io/name=embed-certs-093615
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_12T11_49_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 11:49:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-093615
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 12:04:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 11:59:54 +0000   Mon, 12 Aug 2024 11:49:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 11:59:54 +0000   Mon, 12 Aug 2024 11:49:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 11:59:54 +0000   Mon, 12 Aug 2024 11:49:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 11:59:54 +0000   Mon, 12 Aug 2024 11:49:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.191
	  Hostname:    embed-certs-093615
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 10f7733c219f4141bc1cc7d55f20a08a
	  System UUID:                10f7733c-219f-4141-bc1c-c7d55f20a08a
	  Boot ID:                    52319191-26f0-4bd5-85ad-e38640b2e855
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-cjbwn                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-zcpcc                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-093615                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-093615             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-093615    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-26xvl                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-093615             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-569cc877fc-kwk6t               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node embed-certs-093615 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node embed-certs-093615 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node embed-certs-093615 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node embed-certs-093615 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node embed-certs-093615 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node embed-certs-093615 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                node-controller  Node embed-certs-093615 event: Registered Node embed-certs-093615 in Controller
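For reference, the percentages in the Allocated resources table above are each figure divided by the node's allocatable capacity listed earlier in this section: 950m of CPU requests against 2 CPUs (2000m) is 47.5%, shown as 47%; 440Mi of memory requests against 2164184Ki (about 2113Mi) is roughly 20.8%, shown as 20%; and the 340Mi memory limit works out to about 16%.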
	
	
	==> dmesg <==
	[  +0.055959] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045444] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.027249] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.144775] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.627689] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.547381] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.067931] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067174] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.169367] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +0.150980] systemd-fstab-generator[680]: Ignoring "noauto" option for root device
	[  +0.289404] systemd-fstab-generator[710]: Ignoring "noauto" option for root device
	[  +4.568962] systemd-fstab-generator[807]: Ignoring "noauto" option for root device
	[  +0.070215] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.076799] systemd-fstab-generator[929]: Ignoring "noauto" option for root device
	[  +4.658866] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.716386] kauditd_printk_skb: 79 callbacks suppressed
	[Aug12 11:49] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.635053] systemd-fstab-generator[3572]: Ignoring "noauto" option for root device
	[  +6.060865] systemd-fstab-generator[3896]: Ignoring "noauto" option for root device
	[  +0.072476] kauditd_printk_skb: 57 callbacks suppressed
	[ +14.321167] systemd-fstab-generator[4094]: Ignoring "noauto" option for root device
	[  +0.116619] kauditd_printk_skb: 12 callbacks suppressed
	[Aug12 11:50] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [5c50823884ae41af6cbe94544af5706985546f1b0e41dc59574bb16dfcb71d9c] <==
	{"level":"info","ts":"2024-08-12T11:49:19.44919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457fa619cab3a8e became candidate at term 2"}
	{"level":"info","ts":"2024-08-12T11:49:19.449196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457fa619cab3a8e received MsgVoteResp from 457fa619cab3a8e at term 2"}
	{"level":"info","ts":"2024-08-12T11:49:19.449215Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457fa619cab3a8e became leader at term 2"}
	{"level":"info","ts":"2024-08-12T11:49:19.449222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 457fa619cab3a8e elected leader 457fa619cab3a8e at term 2"}
	{"level":"info","ts":"2024-08-12T11:49:19.453301Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"457fa619cab3a8e","local-member-attributes":"{Name:embed-certs-093615 ClientURLs:[https://192.168.72.191:2379]}","request-path":"/0/members/457fa619cab3a8e/attributes","cluster-id":"13882d9d804521e5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-12T11:49:19.453436Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-12T11:49:19.453805Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T11:49:19.453965Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-12T11:49:19.462171Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-12T11:49:19.462763Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-12T11:49:19.462291Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.191:2379"}
	{"level":"info","ts":"2024-08-12T11:49:19.464243Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"13882d9d804521e5","local-member-id":"457fa619cab3a8e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T11:49:19.464405Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T11:49:19.464447Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T11:49:19.48048Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-12T11:52:10.148388Z","caller":"traceutil/trace.go:171","msg":"trace[693313145] transaction","detail":"{read_only:false; response_revision:606; number_of_response:1; }","duration":"126.624487ms","start":"2024-08-12T11:52:10.021717Z","end":"2024-08-12T11:52:10.148341Z","steps":["trace[693313145] 'process raft request'  (duration: 126.440189ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T11:59:19.561588Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":712}
	{"level":"info","ts":"2024-08-12T11:59:19.572302Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":712,"took":"9.688036ms","hash":4180242281,"current-db-size-bytes":2220032,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2220032,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-08-12T11:59:19.572351Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4180242281,"revision":712,"compact-revision":-1}
	{"level":"info","ts":"2024-08-12T12:03:53.136327Z","caller":"traceutil/trace.go:171","msg":"trace[189656341] transaction","detail":"{read_only:false; response_revision:1177; number_of_response:1; }","duration":"253.285171ms","start":"2024-08-12T12:03:52.882982Z","end":"2024-08-12T12:03:53.136267Z","steps":["trace[189656341] 'process raft request'  (duration: 156.405605ms)","trace[189656341] 'compare'  (duration: 96.721795ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-12T12:03:54.162476Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.204603ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-12T12:03:54.162774Z","caller":"traceutil/trace.go:171","msg":"trace[258315013] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1178; }","duration":"190.628521ms","start":"2024-08-12T12:03:53.972121Z","end":"2024-08-12T12:03:54.162749Z","steps":["trace[258315013] 'range keys from in-memory index tree'  (duration: 190.131695ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T12:04:19.569273Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":954}
	{"level":"info","ts":"2024-08-12T12:04:19.57565Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":954,"took":"5.454186ms","hash":2422206275,"current-db-size-bytes":2220032,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1576960,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-12T12:04:19.575773Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2422206275,"revision":954,"compact-revision":712}
	
	
	==> kernel <==
	 12:04:43 up 20 min,  0 users,  load average: 0.69, 0.35, 0.25
	Linux embed-certs-093615 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [81a99ad0a2faa742909ec94c2078f7075a9986f0655c9d860d3e4b92c5b1223a] <==
	I0812 11:59:21.983373       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0812 12:00:21.982490       1 handler_proxy.go:93] no RequestInfo found in the context
	E0812 12:00:21.982597       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0812 12:00:21.982605       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0812 12:00:21.983607       1 handler_proxy.go:93] no RequestInfo found in the context
	E0812 12:00:21.983700       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0812 12:00:21.983729       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0812 12:02:21.983419       1 handler_proxy.go:93] no RequestInfo found in the context
	E0812 12:02:21.983731       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0812 12:02:21.983760       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0812 12:02:21.983869       1 handler_proxy.go:93] no RequestInfo found in the context
	E0812 12:02:21.983904       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0812 12:02:21.985157       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0812 12:04:20.985928       1 handler_proxy.go:93] no RequestInfo found in the context
	E0812 12:04:20.986124       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0812 12:04:21.986273       1 handler_proxy.go:93] no RequestInfo found in the context
	E0812 12:04:21.986443       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0812 12:04:21.986487       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0812 12:04:21.986361       1 handler_proxy.go:93] no RequestInfo found in the context
	E0812 12:04:21.986650       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0812 12:04:21.987891       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
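Every 503 above is the apiserver failing to fetch the OpenAPI spec from the aggregated v1beta1.metrics.k8s.io APIService, whose backing metrics-server pod never starts (see the kubelet ImagePullBackOff entries further down); the controller-manager errors in the next section are the same stale APIService seen from the discovery side. Two quick checks, assuming the addon's usual Service name of metrics-server in kube-system:

    kubectl --context embed-certs-093615 get apiservice v1beta1.metrics.k8s.io
    kubectl --context embed-certs-093615 -n kube-system get endpoints metrics-server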
	
	
	==> kube-controller-manager [c719e81534f0ece7830b9712a865b739f53d90fc6379062adb5ffc60065dd36e] <==
	I0812 11:59:08.146382       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 11:59:37.670431       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 11:59:38.154086       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:00:07.676408       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 12:00:08.161811       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:00:37.683546       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 12:00:38.170398       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0812 12:00:41.490444       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="238.168µs"
	I0812 12:00:53.489077       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="505.318µs"
	E0812 12:01:07.688891       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 12:01:08.178769       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:01:37.694287       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 12:01:38.185855       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:02:07.700202       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 12:02:08.194588       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:02:37.705530       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 12:02:38.202487       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:03:07.712227       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 12:03:08.213698       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:03:37.718290       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 12:03:38.224066       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:04:07.724213       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 12:04:08.231702       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:04:37.730423       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 12:04:38.242854       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [116de1fd0f81fcc9a61ddacd12b81674c9a887197a3aebaa4ae3a6ddfc637779] <==
	I0812 11:49:38.480795       1 server_linux.go:69] "Using iptables proxy"
	I0812 11:49:38.496197       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.191"]
	I0812 11:49:38.632332       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0812 11:49:38.632416       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0812 11:49:38.632434       1 server_linux.go:165] "Using iptables Proxier"
	I0812 11:49:38.635272       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0812 11:49:38.635637       1 server.go:872] "Version info" version="v1.30.3"
	I0812 11:49:38.635667       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 11:49:38.637117       1 config.go:192] "Starting service config controller"
	I0812 11:49:38.637226       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0812 11:49:38.637269       1 config.go:101] "Starting endpoint slice config controller"
	I0812 11:49:38.637302       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0812 11:49:38.642847       1 config.go:319] "Starting node config controller"
	I0812 11:49:38.642887       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0812 11:49:38.737474       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0812 11:49:38.737551       1 shared_informer.go:320] Caches are synced for service config
	I0812 11:49:38.743682       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3de12c4bbaca1a20ac2b011874af396a6391b160b46e59d40a394ec25cf9516f] <==
	E0812 11:49:21.006567       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0812 11:49:21.006724       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0812 11:49:21.006752       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0812 11:49:21.006795       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0812 11:49:21.006818       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0812 11:49:21.006848       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0812 11:49:21.842224       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0812 11:49:21.842273       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0812 11:49:21.919155       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0812 11:49:21.919223       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0812 11:49:22.131771       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0812 11:49:22.131886       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0812 11:49:22.145738       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0812 11:49:22.145892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0812 11:49:22.273284       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0812 11:49:22.273391       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0812 11:49:22.300314       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0812 11:49:22.300899       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0812 11:49:22.327790       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0812 11:49:22.327915       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0812 11:49:22.330554       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0812 11:49:22.330666       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0812 11:49:22.440276       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0812 11:49:22.440377       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0812 11:49:24.395710       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
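The forbidden list/watch errors above are confined to scheduler start-up: the last one is logged at 11:49:22 and the scheduler's caches sync at 11:49:24 (final line), after which nothing further is reported. If they persisted, one thing worth confirming is that the scheduler's bootstrap ClusterRoleBinding exists (standard bootstrap name, assumed here):

    kubectl --context embed-certs-093615 get clusterrolebinding system:kube-scheduler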
	
	
	==> kubelet <==
	Aug 12 12:02:23 embed-certs-093615 kubelet[3903]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 12:02:23 embed-certs-093615 kubelet[3903]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 12:02:23 embed-certs-093615 kubelet[3903]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 12:02:23 embed-certs-093615 kubelet[3903]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 12:02:28 embed-certs-093615 kubelet[3903]: E0812 12:02:28.471606    3903 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kwk6t" podUID="5817f68c-ab3e-4b50-acf1-8d56d25dcbcd"
	Aug 12 12:02:42 embed-certs-093615 kubelet[3903]: E0812 12:02:42.470794    3903 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kwk6t" podUID="5817f68c-ab3e-4b50-acf1-8d56d25dcbcd"
	Aug 12 12:02:53 embed-certs-093615 kubelet[3903]: E0812 12:02:53.471738    3903 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kwk6t" podUID="5817f68c-ab3e-4b50-acf1-8d56d25dcbcd"
	Aug 12 12:03:08 embed-certs-093615 kubelet[3903]: E0812 12:03:08.471482    3903 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kwk6t" podUID="5817f68c-ab3e-4b50-acf1-8d56d25dcbcd"
	Aug 12 12:03:21 embed-certs-093615 kubelet[3903]: E0812 12:03:21.471986    3903 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kwk6t" podUID="5817f68c-ab3e-4b50-acf1-8d56d25dcbcd"
	Aug 12 12:03:23 embed-certs-093615 kubelet[3903]: E0812 12:03:23.496426    3903 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 12:03:23 embed-certs-093615 kubelet[3903]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 12:03:23 embed-certs-093615 kubelet[3903]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 12:03:23 embed-certs-093615 kubelet[3903]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 12:03:23 embed-certs-093615 kubelet[3903]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 12:03:35 embed-certs-093615 kubelet[3903]: E0812 12:03:35.474263    3903 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kwk6t" podUID="5817f68c-ab3e-4b50-acf1-8d56d25dcbcd"
	Aug 12 12:03:50 embed-certs-093615 kubelet[3903]: E0812 12:03:50.470780    3903 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kwk6t" podUID="5817f68c-ab3e-4b50-acf1-8d56d25dcbcd"
	Aug 12 12:04:05 embed-certs-093615 kubelet[3903]: E0812 12:04:05.471743    3903 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kwk6t" podUID="5817f68c-ab3e-4b50-acf1-8d56d25dcbcd"
	Aug 12 12:04:17 embed-certs-093615 kubelet[3903]: E0812 12:04:17.471486    3903 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kwk6t" podUID="5817f68c-ab3e-4b50-acf1-8d56d25dcbcd"
	Aug 12 12:04:23 embed-certs-093615 kubelet[3903]: E0812 12:04:23.494724    3903 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 12:04:23 embed-certs-093615 kubelet[3903]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 12:04:23 embed-certs-093615 kubelet[3903]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 12:04:23 embed-certs-093615 kubelet[3903]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 12:04:23 embed-certs-093615 kubelet[3903]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 12:04:28 embed-certs-093615 kubelet[3903]: E0812 12:04:28.471648    3903 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kwk6t" podUID="5817f68c-ab3e-4b50-acf1-8d56d25dcbcd"
	Aug 12 12:04:40 embed-certs-093615 kubelet[3903]: E0812 12:04:40.475497    3903 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kwk6t" podUID="5817f68c-ab3e-4b50-acf1-8d56d25dcbcd"
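Two recurring items in the kubelet log: the metrics-server pod stays in ImagePullBackOff because its image points at fake.domain/registry.k8s.io/echoserver:1.4, which can never be pulled, and the iptables canary failure is the kernel-side message shown verbatim (the nat table is not available to ip6tables in the guest, per the "do you need to insmod?" hint). Ways to confirm both, assuming the profile is still running:

    kubectl --context embed-certs-093615 -n kube-system describe pod metrics-server-569cc877fc-kwk6t
    minikube -p embed-certs-093615 ssh "sudo ip6tables -t nat -L" 2>&1 | head -n 2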
	
	
	==> storage-provisioner [9b995fc3e6be3942acbde64819cc76f96f3521923b35c9ae8fbec13f40206e98] <==
	I0812 11:49:39.171735       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0812 11:49:39.191105       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0812 11:49:39.191229       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0812 11:49:39.208649       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0812 11:49:39.210099       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8fcba3c9-31aa-44e8-bdf8-fdb149899bc1", APIVersion:"v1", ResourceVersion:"429", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-093615_dd3c78b0-c18c-46fc-85c1-b42b6876d95c became leader
	I0812 11:49:39.210202       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-093615_dd3c78b0-c18c-46fc-85c1-b42b6876d95c!
	I0812 11:49:39.311319       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-093615_dd3c78b0-c18c-46fc-85c1-b42b6876d95c!
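The provisioner's leader election uses an Endpoints lock named k8s.io-minikube-hostpath in kube-system, as the LeaderElection event above records; the current holder is typically kept in that object's annotations and can be inspected directly (assuming the cluster is still reachable):

    kubectl --context embed-certs-093615 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml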
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-093615 -n embed-certs-093615
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-093615 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-kwk6t
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-093615 describe pod metrics-server-569cc877fc-kwk6t
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-093615 describe pod metrics-server-569cc877fc-kwk6t: exit status 1 (60.968745ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-kwk6t" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-093615 describe pod metrics-server-569cc877fc-kwk6t: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (358.92s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (118.63s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
E0812 12:00:45.935576   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.17:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.17:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-835962 -n old-k8s-version-835962
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-835962 -n old-k8s-version-835962: exit status 2 (250.152573ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-835962" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-835962 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-835962 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.016µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-835962 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-835962 -n old-k8s-version-835962
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-835962 -n old-k8s-version-835962: exit status 2 (219.061189ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-835962 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p pause-693259                                        | pause-693259                 | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-693259                                        | pause-693259                 | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-693259                                        | pause-693259                 | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	| start   | -p embed-certs-093615                                  | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:35 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-002803                              | cert-expiration-002803       | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	| delete  | -p                                                     | disable-driver-mounts-101845 | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:34 UTC |
	|         | disable-driver-mounts-101845                           |                              |         |         |                     |                     |
	| start   | -p no-preload-993542                                   | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:34 UTC | 12 Aug 24 11:36 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-093615            | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 11:35 UTC | 12 Aug 24 11:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-093615                                  | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 11:35 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-993542             | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:36 UTC | 12 Aug 24 11:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-993542                                   | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-835962        | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 11:37 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-093615                 | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 11:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-093615                                  | embed-certs-093615           | jenkins | v1.33.1 | 12 Aug 24 11:38 UTC | 12 Aug 24 11:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-835962                              | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 11:38 UTC | 12 Aug 24 11:39 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-835962             | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC | 12 Aug 24 11:39 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-835962                              | old-k8s-version-835962       | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-535697                           | kubernetes-upgrade-535697    | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC | 12 Aug 24 11:39 UTC |
	| start   | -p                                                     | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC | 12 Aug 24 11:44 UTC |
	|         | default-k8s-diff-port-581883                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-993542                  | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-993542                                   | no-preload-993542            | jenkins | v1.33.1 | 12 Aug 24 11:39 UTC | 12 Aug 24 11:49 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-581883  | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:44 UTC | 12 Aug 24 11:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:44 UTC |                     |
	|         | default-k8s-diff-port-581883                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-581883       | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-581883 | jenkins | v1.33.1 | 12 Aug 24 11:46 UTC | 12 Aug 24 11:57 UTC |
	|         | default-k8s-diff-port-581883                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 11:46:59
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 11:46:59.013199   59908 out.go:291] Setting OutFile to fd 1 ...
	I0812 11:46:59.013476   59908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:46:59.013486   59908 out.go:304] Setting ErrFile to fd 2...
	I0812 11:46:59.013490   59908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:46:59.013689   59908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 11:46:59.014204   59908 out.go:298] Setting JSON to false
	I0812 11:46:59.015302   59908 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5360,"bootTime":1723457859,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 11:46:59.015368   59908 start.go:139] virtualization: kvm guest
	I0812 11:46:59.017512   59908 out.go:177] * [default-k8s-diff-port-581883] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 11:46:59.018833   59908 notify.go:220] Checking for updates...
	I0812 11:46:59.018859   59908 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 11:46:59.020251   59908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 11:46:59.021646   59908 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 11:46:59.022806   59908 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 11:46:59.024110   59908 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 11:46:59.025476   59908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 11:46:59.027290   59908 config.go:182] Loaded profile config "default-k8s-diff-port-581883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 11:46:59.027911   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:46:59.028000   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:46:59.042960   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39549
	I0812 11:46:59.043506   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:46:59.044010   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:46:59.044038   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:46:59.044357   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:46:59.044528   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:46:59.044791   59908 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 11:46:59.045201   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:46:59.045244   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:46:59.060824   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35189
	I0812 11:46:59.061268   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:46:59.061747   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:46:59.061775   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:46:59.062156   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:46:59.062346   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:46:59.101403   59908 out.go:177] * Using the kvm2 driver based on existing profile
	I0812 11:46:59.102677   59908 start.go:297] selected driver: kvm2
	I0812 11:46:59.102698   59908 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-581883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-581883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:46:59.102863   59908 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 11:46:59.103621   59908 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 11:46:59.103690   59908 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19409-3774/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 11:46:59.119409   59908 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 11:46:59.119785   59908 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 11:46:59.119848   59908 cni.go:84] Creating CNI manager for ""
	I0812 11:46:59.119862   59908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:46:59.119900   59908 start.go:340] cluster config:
	{Name:default-k8s-diff-port-581883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-581883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:46:59.120006   59908 iso.go:125] acquiring lock: {Name:mk817f4bb6a5031e68978029203802957205757f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 11:46:59.121814   59908 out.go:177] * Starting "default-k8s-diff-port-581883" primary control-plane node in "default-k8s-diff-port-581883" cluster
	I0812 11:46:59.123067   59908 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 11:46:59.123111   59908 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0812 11:46:59.123124   59908 cache.go:56] Caching tarball of preloaded images
	I0812 11:46:59.123213   59908 preload.go:172] Found /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 11:46:59.123228   59908 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 11:46:59.123315   59908 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/config.json ...
	I0812 11:46:59.123508   59908 start.go:360] acquireMachinesLock for default-k8s-diff-port-581883: {Name:mkf1f3ff7da562a087a1e344d13e67c3c8140973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 11:46:59.123549   59908 start.go:364] duration metric: took 23.58µs to acquireMachinesLock for "default-k8s-diff-port-581883"
	I0812 11:46:59.123562   59908 start.go:96] Skipping create...Using existing machine configuration
	I0812 11:46:59.123569   59908 fix.go:54] fixHost starting: 
	I0812 11:46:59.123822   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:46:59.123852   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:46:59.138741   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44075
	I0812 11:46:59.139136   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:46:59.139611   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:46:59.139638   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:46:59.139938   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:46:59.140109   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:46:59.140220   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetState
	I0812 11:46:59.141738   59908 fix.go:112] recreateIfNeeded on default-k8s-diff-port-581883: state=Running err=<nil>
	W0812 11:46:59.141754   59908 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 11:46:59.143728   59908 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-581883" VM ...
	I0812 11:46:54.633587   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:46:54.653858   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:46:54.653945   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:46:54.693961   57198 cri.go:89] found id: ""
	I0812 11:46:54.693985   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.693992   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:46:54.693997   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:46:54.694045   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:46:54.728922   57198 cri.go:89] found id: ""
	I0812 11:46:54.728951   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.728963   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:46:54.728970   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:46:54.729034   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:46:54.764203   57198 cri.go:89] found id: ""
	I0812 11:46:54.764235   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.764246   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:46:54.764253   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:46:54.764316   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:46:54.805321   57198 cri.go:89] found id: ""
	I0812 11:46:54.805352   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.805363   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:46:54.805370   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:46:54.805437   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:46:54.844243   57198 cri.go:89] found id: ""
	I0812 11:46:54.844273   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.844281   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:46:54.844287   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:46:54.844345   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:46:54.883145   57198 cri.go:89] found id: ""
	I0812 11:46:54.883181   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.883192   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:46:54.883200   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:46:54.883263   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:46:54.921119   57198 cri.go:89] found id: ""
	I0812 11:46:54.921150   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.921160   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:46:54.921168   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:46:54.921230   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:46:54.955911   57198 cri.go:89] found id: ""
	I0812 11:46:54.955941   57198 logs.go:276] 0 containers: []
	W0812 11:46:54.955949   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:46:54.955958   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:46:54.955969   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:46:55.006069   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:46:55.006108   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:46:55.020600   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:46:55.020637   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:46:55.094897   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:46:55.094917   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:46:55.094932   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:46:55.173601   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:46:55.173642   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:46:57.711917   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:46:57.726261   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:46:57.726340   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:46:57.762810   57198 cri.go:89] found id: ""
	I0812 11:46:57.762834   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.762845   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:46:57.762853   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:46:57.762919   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:46:57.796596   57198 cri.go:89] found id: ""
	I0812 11:46:57.796638   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.796649   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:46:57.796657   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:46:57.796719   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:46:57.829568   57198 cri.go:89] found id: ""
	I0812 11:46:57.829600   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.829607   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:46:57.829612   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:46:57.829659   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:46:57.861229   57198 cri.go:89] found id: ""
	I0812 11:46:57.861260   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.861271   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:46:57.861278   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:46:57.861339   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:46:57.892274   57198 cri.go:89] found id: ""
	I0812 11:46:57.892302   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.892312   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:46:57.892320   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:46:57.892384   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:46:57.924635   57198 cri.go:89] found id: ""
	I0812 11:46:57.924662   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.924670   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:46:57.924675   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:46:57.924723   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:46:57.961539   57198 cri.go:89] found id: ""
	I0812 11:46:57.961584   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.961592   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:46:57.961598   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:46:57.961656   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:46:57.994115   57198 cri.go:89] found id: ""
	I0812 11:46:57.994148   57198 logs.go:276] 0 containers: []
	W0812 11:46:57.994160   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:46:57.994170   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:46:57.994182   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:46:58.067608   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:46:58.067648   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:46:58.105003   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:46:58.105036   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:46:58.156152   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:46:58.156186   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:46:58.169380   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:46:58.169409   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:46:58.236991   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
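The block above repeats throughout this run: crictl finds no kube-apiserver container, yet the log gatherer still shells out to "kubectl describe nodes" against localhost:8443 and gets "connection refused" every time. A minimal Go sketch of the kind of reachability probe that would explain (and short-circuit) those failed attempts is shown below; this is illustrative only, not minikube's actual implementation, and the helper name apiServerUp and the 2s timeout are assumptions.

// probe_apiserver.go - illustrative sketch, assuming a TCP probe is enough to
// decide whether "kubectl describe nodes" against localhost:8443 can succeed.
package main

import (
	"fmt"
	"net"
	"time"
)

// apiServerUp reports whether something is listening on addr.
func apiServerUp(addr string) bool {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	if !apiServerUp("127.0.0.1:8443") {
		// Matches the situation in the log: no kube-apiserver container,
		// so every describe attempt fails with "connection refused".
		fmt.Println("apiserver not reachable on localhost:8443; skipping describe nodes")
		return
	}
	fmt.Println("apiserver reachable; describe nodes should succeed")
}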
	I0812 11:46:56.296673   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:46:58.297248   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:00.796584   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:00.182029   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:02.182240   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:46:59.144895   59908 machine.go:94] provisionDockerMachine start ...
	I0812 11:46:59.144926   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:46:59.145161   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:46:59.147827   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:46:59.148278   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:43:32 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:46:59.148305   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:46:59.148451   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:46:59.148645   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:46:59.148820   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:46:59.148953   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:46:59.149111   59908 main.go:141] libmachine: Using SSH client type: native
	I0812 11:46:59.149331   59908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0812 11:46:59.149345   59908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0812 11:47:02.045251   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
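The "Error dialing TCP 192.168.50.114:22: connect: no route to host" lines from process 59908 recur every few seconds while default-k8s-diff-port-581883 is being re-provisioned. A small Go sketch of a dial-with-retry loop of that shape follows; it is a sketch under stated assumptions (attempt count, timeout and back-off are invented for illustration), not libmachine's code.

// dial_retry.go - illustrative retry loop for the SSH endpoint seen in the log.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const addr = "192.168.50.114:22" // guest SSH address from the log
	for attempt := 1; attempt <= 5; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			// Mirrors the repeated "no route to host" entries above.
			fmt.Printf("attempt %d: error dialing TCP %s: %v\n", attempt, addr, err)
			time.Sleep(3 * time.Second)
			continue
		}
		conn.Close()
		fmt.Printf("attempt %d: %s reachable\n", attempt, addr)
		return
	}
	fmt.Println("giving up: host still unreachable")
}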
	I0812 11:47:00.737522   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:00.750916   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:00.750991   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:00.782713   57198 cri.go:89] found id: ""
	I0812 11:47:00.782734   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.782742   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:00.782747   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:00.782793   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:00.816552   57198 cri.go:89] found id: ""
	I0812 11:47:00.816576   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.816584   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:00.816590   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:00.816639   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:00.850761   57198 cri.go:89] found id: ""
	I0812 11:47:00.850784   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.850794   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:00.850801   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:00.850864   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:00.888099   57198 cri.go:89] found id: ""
	I0812 11:47:00.888138   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.888146   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:00.888152   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:00.888210   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:00.926073   57198 cri.go:89] found id: ""
	I0812 11:47:00.926103   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.926113   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:00.926120   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:00.926187   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:00.963404   57198 cri.go:89] found id: ""
	I0812 11:47:00.963434   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.963442   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:00.963447   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:00.963508   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:00.998331   57198 cri.go:89] found id: ""
	I0812 11:47:00.998366   57198 logs.go:276] 0 containers: []
	W0812 11:47:00.998376   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:00.998385   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:00.998448   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:01.042696   57198 cri.go:89] found id: ""
	I0812 11:47:01.042729   57198 logs.go:276] 0 containers: []
	W0812 11:47:01.042738   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:01.042750   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:01.042764   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:01.134880   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:01.134918   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:01.171185   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:01.171223   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:01.222565   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:01.222608   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:01.236042   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:01.236076   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:01.309342   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:03.810121   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:03.822945   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:03.823023   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:03.856316   57198 cri.go:89] found id: ""
	I0812 11:47:03.856342   57198 logs.go:276] 0 containers: []
	W0812 11:47:03.856353   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:03.856361   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:03.856428   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:03.894579   57198 cri.go:89] found id: ""
	I0812 11:47:03.894610   57198 logs.go:276] 0 containers: []
	W0812 11:47:03.894622   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:03.894630   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:03.894680   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:03.929306   57198 cri.go:89] found id: ""
	I0812 11:47:03.929334   57198 logs.go:276] 0 containers: []
	W0812 11:47:03.929352   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:03.929359   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:03.929419   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:03.970739   57198 cri.go:89] found id: ""
	I0812 11:47:03.970774   57198 logs.go:276] 0 containers: []
	W0812 11:47:03.970786   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:03.970794   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:03.970872   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:04.004583   57198 cri.go:89] found id: ""
	I0812 11:47:04.004610   57198 logs.go:276] 0 containers: []
	W0812 11:47:04.004619   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:04.004630   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:04.004681   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:04.039259   57198 cri.go:89] found id: ""
	I0812 11:47:04.039288   57198 logs.go:276] 0 containers: []
	W0812 11:47:04.039298   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:04.039304   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:04.039372   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:04.072490   57198 cri.go:89] found id: ""
	I0812 11:47:04.072522   57198 logs.go:276] 0 containers: []
	W0812 11:47:04.072532   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:04.072547   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:04.072602   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:04.105648   57198 cri.go:89] found id: ""
	I0812 11:47:04.105677   57198 logs.go:276] 0 containers: []
	W0812 11:47:04.105686   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:04.105694   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:04.105705   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:04.181854   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:04.181880   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:04.181894   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:04.258499   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:04.258541   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:03.294934   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:05.295154   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:04.183393   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:06.682752   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:05.121108   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:04.296893   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:04.296918   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:04.347475   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:04.347514   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:06.862382   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:06.876230   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:06.876314   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:06.919447   57198 cri.go:89] found id: ""
	I0812 11:47:06.919487   57198 logs.go:276] 0 containers: []
	W0812 11:47:06.919499   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:06.919508   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:06.919581   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:06.954000   57198 cri.go:89] found id: ""
	I0812 11:47:06.954035   57198 logs.go:276] 0 containers: []
	W0812 11:47:06.954046   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:06.954055   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:06.954124   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:06.988225   57198 cri.go:89] found id: ""
	I0812 11:47:06.988256   57198 logs.go:276] 0 containers: []
	W0812 11:47:06.988266   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:06.988274   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:06.988347   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:07.024425   57198 cri.go:89] found id: ""
	I0812 11:47:07.024452   57198 logs.go:276] 0 containers: []
	W0812 11:47:07.024464   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:07.024471   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:07.024536   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:07.059758   57198 cri.go:89] found id: ""
	I0812 11:47:07.059785   57198 logs.go:276] 0 containers: []
	W0812 11:47:07.059793   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:07.059800   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:07.059859   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:07.093540   57198 cri.go:89] found id: ""
	I0812 11:47:07.093570   57198 logs.go:276] 0 containers: []
	W0812 11:47:07.093580   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:07.093587   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:07.093649   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:07.126880   57198 cri.go:89] found id: ""
	I0812 11:47:07.126910   57198 logs.go:276] 0 containers: []
	W0812 11:47:07.126920   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:07.126929   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:07.126984   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:07.159930   57198 cri.go:89] found id: ""
	I0812 11:47:07.159959   57198 logs.go:276] 0 containers: []
	W0812 11:47:07.159970   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:07.159980   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:07.159995   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:07.214022   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:07.214063   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:07.227009   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:07.227037   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:07.297583   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:07.297609   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:07.297629   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:07.377229   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:07.377281   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:07.296302   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:09.296695   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:09.182760   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:11.682727   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:11.197110   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:09.914683   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:09.927943   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:09.928014   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:09.961729   57198 cri.go:89] found id: ""
	I0812 11:47:09.961757   57198 logs.go:276] 0 containers: []
	W0812 11:47:09.961768   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:09.961775   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:09.961835   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:09.998895   57198 cri.go:89] found id: ""
	I0812 11:47:09.998923   57198 logs.go:276] 0 containers: []
	W0812 11:47:09.998931   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:09.998936   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:09.998989   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:10.036414   57198 cri.go:89] found id: ""
	I0812 11:47:10.036447   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.036457   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:10.036465   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:10.036519   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:10.073783   57198 cri.go:89] found id: ""
	I0812 11:47:10.073811   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.073818   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:10.073824   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:10.073872   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:10.110532   57198 cri.go:89] found id: ""
	I0812 11:47:10.110566   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.110577   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:10.110584   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:10.110643   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:10.143728   57198 cri.go:89] found id: ""
	I0812 11:47:10.143768   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.143782   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:10.143791   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:10.143875   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:10.176706   57198 cri.go:89] found id: ""
	I0812 11:47:10.176740   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.176749   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:10.176754   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:10.176803   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:10.210409   57198 cri.go:89] found id: ""
	I0812 11:47:10.210439   57198 logs.go:276] 0 containers: []
	W0812 11:47:10.210449   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:10.210460   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:10.210474   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:10.261338   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:10.261378   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:10.274313   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:10.274346   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:10.341830   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:10.341865   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:10.341881   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:10.417654   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:10.417699   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:12.954982   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:12.967755   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:12.967841   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:13.001425   57198 cri.go:89] found id: ""
	I0812 11:47:13.001452   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.001462   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:13.001470   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:13.001528   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:13.036527   57198 cri.go:89] found id: ""
	I0812 11:47:13.036561   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.036572   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:13.036579   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:13.036640   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:13.073271   57198 cri.go:89] found id: ""
	I0812 11:47:13.073301   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.073314   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:13.073323   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:13.073380   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:13.107512   57198 cri.go:89] found id: ""
	I0812 11:47:13.107543   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.107551   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:13.107557   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:13.107614   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:13.141938   57198 cri.go:89] found id: ""
	I0812 11:47:13.141972   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.141984   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:13.141991   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:13.142051   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:13.176628   57198 cri.go:89] found id: ""
	I0812 11:47:13.176660   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.176672   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:13.176679   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:13.176739   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:13.211620   57198 cri.go:89] found id: ""
	I0812 11:47:13.211649   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.211660   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:13.211667   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:13.211732   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:13.243877   57198 cri.go:89] found id: ""
	I0812 11:47:13.243902   57198 logs.go:276] 0 containers: []
	W0812 11:47:13.243909   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:13.243917   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:13.243928   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:13.297684   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:13.297718   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:13.311287   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:13.311318   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:13.376488   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:13.376516   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:13.376531   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:13.457745   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:13.457786   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:11.795381   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:13.795932   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:14.183038   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:16.183071   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:14.273141   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:15.993556   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:16.006169   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:16.006249   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:16.040541   57198 cri.go:89] found id: ""
	I0812 11:47:16.040569   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.040578   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:16.040583   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:16.040633   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:16.073886   57198 cri.go:89] found id: ""
	I0812 11:47:16.073913   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.073924   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:16.073931   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:16.073993   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:16.107299   57198 cri.go:89] found id: ""
	I0812 11:47:16.107356   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.107369   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:16.107376   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:16.107431   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:16.142168   57198 cri.go:89] found id: ""
	I0812 11:47:16.142200   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.142209   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:16.142215   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:16.142262   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:16.175398   57198 cri.go:89] found id: ""
	I0812 11:47:16.175429   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.175440   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:16.175447   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:16.175509   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:16.210518   57198 cri.go:89] found id: ""
	I0812 11:47:16.210543   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.210551   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:16.210558   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:16.210614   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:16.244570   57198 cri.go:89] found id: ""
	I0812 11:47:16.244602   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.244611   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:16.244617   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:16.244683   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:16.278722   57198 cri.go:89] found id: ""
	I0812 11:47:16.278748   57198 logs.go:276] 0 containers: []
	W0812 11:47:16.278756   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:16.278765   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:16.278777   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:16.322973   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:16.323010   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:16.374888   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:16.374936   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:16.388797   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:16.388827   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:16.462710   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:16.462731   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:16.462742   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:19.046529   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:19.061016   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:19.061083   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:19.098199   57198 cri.go:89] found id: ""
	I0812 11:47:19.098226   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.098238   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:19.098246   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:19.098307   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:19.131177   57198 cri.go:89] found id: ""
	I0812 11:47:19.131207   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.131215   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:19.131222   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:19.131281   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:19.164497   57198 cri.go:89] found id: ""
	I0812 11:47:19.164528   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.164539   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:19.164546   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:19.164619   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:19.200447   57198 cri.go:89] found id: ""
	I0812 11:47:19.200477   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.200485   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:19.200490   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:19.200553   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:19.235004   57198 cri.go:89] found id: ""
	I0812 11:47:19.235039   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.235051   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:19.235058   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:19.235114   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:16.297007   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:18.796402   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:18.186341   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:20.682850   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:22.683087   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:20.349117   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:23.421182   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:19.269669   57198 cri.go:89] found id: ""
	I0812 11:47:19.269700   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.269711   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:19.269719   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:19.269786   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:19.305486   57198 cri.go:89] found id: ""
	I0812 11:47:19.305515   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.305527   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:19.305533   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:19.305610   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:19.340701   57198 cri.go:89] found id: ""
	I0812 11:47:19.340730   57198 logs.go:276] 0 containers: []
	W0812 11:47:19.340737   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:19.340745   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:19.340757   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:19.391595   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:19.391637   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:19.405702   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:19.405730   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:19.476972   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:19.477002   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:19.477017   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:19.560001   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:19.560037   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:22.100167   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:22.112650   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:22.112712   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:22.145625   57198 cri.go:89] found id: ""
	I0812 11:47:22.145651   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.145659   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:22.145665   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:22.145722   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:22.181353   57198 cri.go:89] found id: ""
	I0812 11:47:22.181388   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.181400   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:22.181407   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:22.181465   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:22.213563   57198 cri.go:89] found id: ""
	I0812 11:47:22.213592   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.213603   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:22.213610   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:22.213669   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:22.247589   57198 cri.go:89] found id: ""
	I0812 11:47:22.247614   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.247629   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:22.247635   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:22.247682   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:22.279102   57198 cri.go:89] found id: ""
	I0812 11:47:22.279126   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.279134   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:22.279139   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:22.279187   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:22.316174   57198 cri.go:89] found id: ""
	I0812 11:47:22.316204   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.316215   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:22.316222   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:22.316289   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:22.351875   57198 cri.go:89] found id: ""
	I0812 11:47:22.351904   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.351915   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:22.351920   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:22.351976   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:22.384224   57198 cri.go:89] found id: ""
	I0812 11:47:22.384260   57198 logs.go:276] 0 containers: []
	W0812 11:47:22.384273   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:22.384283   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:22.384297   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:22.423032   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:22.423058   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:22.474127   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:22.474165   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:22.487638   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:22.487672   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:22.556554   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:22.556590   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:22.556607   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:21.295000   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:23.295712   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:25.296884   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:25.183687   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:27.683615   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:25.138357   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:25.152354   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:47:25.152438   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:47:25.187059   57198 cri.go:89] found id: ""
	I0812 11:47:25.187085   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.187095   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:47:25.187104   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:47:25.187164   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:47:25.220817   57198 cri.go:89] found id: ""
	I0812 11:47:25.220840   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.220848   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:47:25.220853   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:47:25.220911   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:47:25.256308   57198 cri.go:89] found id: ""
	I0812 11:47:25.256334   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.256342   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:47:25.256347   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:47:25.256394   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:47:25.290211   57198 cri.go:89] found id: ""
	I0812 11:47:25.290245   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.290254   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:47:25.290263   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:47:25.290334   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:47:25.324612   57198 cri.go:89] found id: ""
	I0812 11:47:25.324644   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.324651   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:47:25.324657   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:47:25.324708   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:47:25.362160   57198 cri.go:89] found id: ""
	I0812 11:47:25.362189   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.362200   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:47:25.362208   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:47:25.362271   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:47:25.396434   57198 cri.go:89] found id: ""
	I0812 11:47:25.396458   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.396466   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:47:25.396471   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:47:25.396531   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:47:25.429708   57198 cri.go:89] found id: ""
	I0812 11:47:25.429738   57198 logs.go:276] 0 containers: []
	W0812 11:47:25.429750   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:47:25.429761   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:47:25.429775   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:47:25.443553   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:47:25.443588   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:47:25.515643   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:47:25.515684   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:47:25.515699   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:47:25.596323   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:47:25.596365   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:47:25.632444   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:47:25.632482   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:47:28.182092   57198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:47:28.195568   57198 kubeadm.go:597] duration metric: took 4m2.144668431s to restartPrimaryControlPlane
	W0812 11:47:28.195647   57198 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0812 11:47:28.195678   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0812 11:47:29.194896   57198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:47:29.210273   57198 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 11:47:29.220401   57198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 11:47:29.230765   57198 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 11:47:29.230783   57198 kubeadm.go:157] found existing configuration files:
	
	I0812 11:47:29.230825   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 11:47:29.240322   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 11:47:29.240392   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 11:47:29.251511   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 11:47:29.261616   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 11:47:29.261675   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 11:47:27.795828   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:29.796889   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:29.683959   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:32.183115   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:32.541112   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:29.273431   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 11:47:29.284262   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 11:47:29.284331   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 11:47:29.295811   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 11:47:29.306613   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 11:47:29.306685   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
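
The grep/rm pairs above all apply one rule: a kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is treated as stale and removed before kubeadm init runs. The same check written as a loop, equivalent to the individual commands in the log rather than minikube's actual implementation:

	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep the file only if it already points at the expected endpoint
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done
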
	I0812 11:47:29.317986   57198 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 11:47:29.566668   57198 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 11:47:32.295992   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:34.795262   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:34.183370   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:36.682661   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:35.613159   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:36.796467   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:39.295851   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:39.182790   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:41.183829   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:41.693116   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:41.795257   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:43.795510   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:45.795595   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:43.681967   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:45.684043   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:44.765178   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:48.296050   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:50.796799   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:48.181748   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:50.182360   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:52.682975   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:50.845098   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:53.917138   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:47:53.299038   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:55.796462   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:55.183044   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:57.685262   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:58.295509   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:00.795668   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:00.182427   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:02.682842   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:47:59.997094   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:03.069083   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:03.296463   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:05.795306   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:05.182884   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:07.682408   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:07.796147   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:10.296184   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:10.182124   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:12.182757   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:09.149157   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:12.221135   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:12.296827   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:14.796551   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:14.682524   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:16.682657   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:18.301111   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:17.295545   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:19.295850   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:18.688121   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:21.182277   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:21.373181   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:21.297142   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:23.798497   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:23.182636   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:25.682702   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:27.682936   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:27.453111   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:26.295505   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:28.296105   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:30.796925   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:29.688759   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:32.182416   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:30.525184   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:33.295379   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:35.296605   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:34.183273   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:36.682829   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:36.605187   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:37.796023   57616 pod_ready.go:102] pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:38.789570   57616 pod_ready.go:81] duration metric: took 4m0.000355544s for pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace to be "Ready" ...
	E0812 11:48:38.789615   57616 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-s52v2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0812 11:48:38.789648   57616 pod_ready.go:38] duration metric: took 4m11.040926567s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:48:38.789687   57616 kubeadm.go:597] duration metric: took 4m21.131138259s to restartPrimaryControlPlane
	W0812 11:48:38.789757   57616 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0812 11:48:38.789794   57616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0812 11:48:38.683163   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:40.683334   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:39.677106   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:43.182845   56845 pod_ready.go:102] pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace has status "Ready":"False"
	I0812 11:48:44.677001   56845 pod_ready.go:81] duration metric: took 4m0.0007218s for pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace to be "Ready" ...
	E0812 11:48:44.677024   56845 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-8856c" in "kube-system" namespace to be "Ready" (will not retry!)
	I0812 11:48:44.677041   56845 pod_ready.go:38] duration metric: took 4m12.037310023s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:48:44.677065   56845 kubeadm.go:597] duration metric: took 4m19.591323336s to restartPrimaryControlPlane
	W0812 11:48:44.677114   56845 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0812 11:48:44.677137   56845 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
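
Both waiters above exhaust their 4m0s WaitExtra budget because the metrics-server pod never reports Ready, and only then fall back to a full kubeadm reset. The condition they poll can also be read directly; a sketch using one of the pod names from this log and the same kubectl binary and kubeconfig the run uses (the jsonpath template is standard kubectl, not something the harness runs):

	# Print every condition of the stuck pod, e.g. Ready=False with its reason
	sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get pod metrics-server-569cc877fc-8856c \
	  -o jsonpath='{range .status.conditions[*]}{.type}={.status} {.reason}{"\n"}{end}'
	# Events usually explain why the container never became ready
	sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system describe pod metrics-server-569cc877fc-8856c | tail -n 20
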
	I0812 11:48:45.757157   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:48.829146   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:54.909142   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:48:57.981079   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
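
Process 59908, meanwhile, never gets past dialing 192.168.50.114:22: "no route to host" means the SSH packets are not reaching the guest at all, as opposed to reaching it and being refused. None of the following is run by the harness; it is a generic checklist for this class of failure on a KVM/libvirt host, with <domain> standing in for the VM name:

	# Basic reachability of the guest address
	ping -c 3 192.168.50.114
	# Is anything answering on the SSH port?
	nc -vz -w 5 192.168.50.114 22
	# Confirm the VM is running and which address libvirt actually leased to it
	sudo virsh list --all
	sudo virsh domifaddr <domain>
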
	I0812 11:49:04.870417   57616 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.080589185s)
	I0812 11:49:04.870490   57616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:49:04.897963   57616 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 11:49:04.912211   57616 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 11:49:04.933833   57616 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 11:49:04.933861   57616 kubeadm.go:157] found existing configuration files:
	
	I0812 11:49:04.933915   57616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 11:49:04.946673   57616 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 11:49:04.946756   57616 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 11:49:04.960851   57616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 11:49:04.989181   57616 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 11:49:04.989259   57616 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 11:49:05.002989   57616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 11:49:05.012600   57616 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 11:49:05.012673   57616 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 11:49:05.022301   57616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 11:49:05.031680   57616 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 11:49:05.031761   57616 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 11:49:05.041453   57616 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 11:49:05.087039   57616 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-rc.0
	I0812 11:49:05.087106   57616 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 11:49:05.195646   57616 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 11:49:05.195788   57616 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 11:49:05.195909   57616 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0812 11:49:05.204565   57616 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 11:49:05.207373   57616 out.go:204]   - Generating certificates and keys ...
	I0812 11:49:05.207481   57616 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 11:49:05.207573   57616 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 11:49:05.207696   57616 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0812 11:49:05.207792   57616 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0812 11:49:05.207896   57616 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0812 11:49:05.207995   57616 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0812 11:49:05.208103   57616 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0812 11:49:05.208195   57616 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0812 11:49:05.208296   57616 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0812 11:49:05.208401   57616 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0812 11:49:05.208456   57616 kubeadm.go:310] [certs] Using the existing "sa" key
	I0812 11:49:05.208531   57616 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 11:49:05.368644   57616 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 11:49:05.523403   57616 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0812 11:49:05.656177   57616 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 11:49:05.786141   57616 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 11:49:05.945607   57616 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 11:49:05.946201   57616 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 11:49:05.948940   57616 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 11:49:05.950857   57616 out.go:204]   - Booting up control plane ...
	I0812 11:49:05.950970   57616 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 11:49:05.951060   57616 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 11:49:05.952093   57616 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 11:49:05.971023   57616 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 11:49:05.978207   57616 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 11:49:05.978421   57616 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 11:49:06.109216   57616 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0812 11:49:06.109362   57616 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0812 11:49:04.061117   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:07.133143   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:07.110595   57616 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001459707s
	I0812 11:49:07.110732   57616 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0812 11:49:12.112776   57616 kubeadm.go:310] [api-check] The API server is healthy after 5.002008667s
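
The kubelet-check and api-check phases above poll health endpoints: the kubelet on http://127.0.0.1:10248/healthz (quoted verbatim in the log) and the API server on its secure port. The same probes can be issued manually from the node; treating /healthz on port 8443 as the standard kube-apiserver health path is an assumption here, since this log only names the port:

	# Kubelet health (plain HTTP, bound to localhost)
	curl -s http://127.0.0.1:10248/healthz; echo
	# API server health (self-signed certificate, hence -k)
	curl -sk https://localhost:8443/healthz; echo
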
	I0812 11:49:12.126637   57616 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0812 11:49:12.141115   57616 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0812 11:49:12.166337   57616 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0812 11:49:12.166727   57616 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-993542 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0812 11:49:12.180548   57616 kubeadm.go:310] [bootstrap-token] Using token: jiwh9x.y6rsv6xjvwdwkbct
	I0812 11:49:12.182174   57616 out.go:204]   - Configuring RBAC rules ...
	I0812 11:49:12.182276   57616 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0812 11:49:12.191053   57616 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0812 11:49:12.203294   57616 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0812 11:49:12.208858   57616 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0812 11:49:12.215501   57616 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0812 11:49:12.227747   57616 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0812 11:49:12.520136   57616 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0812 11:49:12.964503   57616 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0812 11:49:13.523969   57616 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0812 11:49:13.524831   57616 kubeadm.go:310] 
	I0812 11:49:13.524954   57616 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0812 11:49:13.524973   57616 kubeadm.go:310] 
	I0812 11:49:13.525098   57616 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0812 11:49:13.525113   57616 kubeadm.go:310] 
	I0812 11:49:13.525147   57616 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0812 11:49:13.525220   57616 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0812 11:49:13.525311   57616 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0812 11:49:13.525325   57616 kubeadm.go:310] 
	I0812 11:49:13.525411   57616 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0812 11:49:13.525420   57616 kubeadm.go:310] 
	I0812 11:49:13.525489   57616 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0812 11:49:13.525503   57616 kubeadm.go:310] 
	I0812 11:49:13.525572   57616 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0812 11:49:13.525690   57616 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0812 11:49:13.525780   57616 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0812 11:49:13.525790   57616 kubeadm.go:310] 
	I0812 11:49:13.525905   57616 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0812 11:49:13.526000   57616 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0812 11:49:13.526011   57616 kubeadm.go:310] 
	I0812 11:49:13.526119   57616 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jiwh9x.y6rsv6xjvwdwkbct \
	I0812 11:49:13.526271   57616 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc \
	I0812 11:49:13.526307   57616 kubeadm.go:310] 	--control-plane 
	I0812 11:49:13.526317   57616 kubeadm.go:310] 
	I0812 11:49:13.526420   57616 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0812 11:49:13.526429   57616 kubeadm.go:310] 
	I0812 11:49:13.526527   57616 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jiwh9x.y6rsv6xjvwdwkbct \
	I0812 11:49:13.526653   57616 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc 
	I0812 11:49:13.527630   57616 kubeadm.go:310] W0812 11:49:05.056260    3066 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0812 11:49:13.528000   57616 kubeadm.go:310] W0812 11:49:05.058135    3066 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0812 11:49:13.528149   57616 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
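
The two deprecation warnings above name their own remedy: the generated kubeadm.yaml still uses the v1beta3 kubeadm API and can be rewritten with kubeadm config migrate. A sketch against the file used in this run, invoked the same way the log invokes kubeadm; the output path is chosen here purely for illustration:

	sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm config migrate \
	  --old-config /var/tmp/minikube/kubeadm.yaml \
	  --new-config /var/tmp/minikube/kubeadm-migrated.yaml
	# Review what changed before swapping the new file in
	diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm-migrated.yaml
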
	I0812 11:49:13.528175   57616 cni.go:84] Creating CNI manager for ""
	I0812 11:49:13.528189   57616 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:49:13.529938   57616 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0812 11:49:13.213137   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:13.531443   57616 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0812 11:49:13.542933   57616 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
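
The 496-byte /etc/cni/net.d/1-k8s.conflist copied above is the bridge CNI configuration minikube uses with the crio runtime. Its exact contents are not reproduced in this log; purely for orientation, a minimal bridge conflist of the same general shape looks like the following (every name and field value is illustrative, and it is written to /tmp so as not to touch the real file):

	cat <<-'EOF' > /tmp/example-bridge.conflist
	{
	  "cniVersion": "0.4.0",
	  "name": "example-bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge0",
	      "isDefaultGateway": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
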
	I0812 11:49:13.562053   57616 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 11:49:13.562181   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:13.562196   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-993542 minikube.k8s.io/updated_at=2024_08_12T11_49_13_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7 minikube.k8s.io/name=no-preload-993542 minikube.k8s.io/primary=true
	I0812 11:49:13.764006   57616 ops.go:34] apiserver oom_adj: -16
	I0812 11:49:13.764145   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:14.264728   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:14.764225   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:15.264599   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:15.764919   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:15.943701   56845 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.266539018s)
	I0812 11:49:15.943778   56845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:49:15.959746   56845 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 11:49:15.970630   56845 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 11:49:15.980712   56845 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 11:49:15.980729   56845 kubeadm.go:157] found existing configuration files:
	
	I0812 11:49:15.980775   56845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 11:49:15.990070   56845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 11:49:15.990133   56845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 11:49:15.999602   56845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 11:49:16.008767   56845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 11:49:16.008855   56845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 11:49:16.019564   56845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 11:49:16.028585   56845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 11:49:16.028660   56845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 11:49:16.037916   56845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 11:49:16.047028   56845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 11:49:16.047087   56845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 11:49:16.056780   56845 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 11:49:16.104764   56845 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0812 11:49:16.104848   56845 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 11:49:16.239085   56845 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 11:49:16.239218   56845 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 11:49:16.239309   56845 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0812 11:49:16.456581   56845 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 11:49:16.458619   56845 out.go:204]   - Generating certificates and keys ...
	I0812 11:49:16.458731   56845 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 11:49:16.458805   56845 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 11:49:16.458927   56845 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0812 11:49:16.459037   56845 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0812 11:49:16.459121   56845 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0812 11:49:16.459191   56845 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0812 11:49:16.459281   56845 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0812 11:49:16.459385   56845 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0812 11:49:16.459469   56845 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0812 11:49:16.459569   56845 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0812 11:49:16.459643   56845 kubeadm.go:310] [certs] Using the existing "sa" key
	I0812 11:49:16.459734   56845 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 11:49:16.579477   56845 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 11:49:16.765880   56845 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0812 11:49:16.885469   56845 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 11:49:16.955885   56845 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 11:49:17.091576   56845 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 11:49:17.092005   56845 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 11:49:17.094454   56845 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 11:49:17.096720   56845 out.go:204]   - Booting up control plane ...
	I0812 11:49:17.096850   56845 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 11:49:17.096976   56845 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 11:49:17.098357   56845 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 11:49:17.115656   56845 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 11:49:17.116069   56845 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 11:49:17.116128   56845 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 11:49:17.256475   56845 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0812 11:49:17.256550   56845 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0812 11:49:17.758741   56845 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.271569ms
	I0812 11:49:17.758818   56845 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0812 11:49:16.264606   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:16.764905   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:17.264989   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:17.765205   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:18.265008   57616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:18.380060   57616 kubeadm.go:1113] duration metric: took 4.817945872s to wait for elevateKubeSystemPrivileges
	I0812 11:49:18.380107   57616 kubeadm.go:394] duration metric: took 5m0.782175026s to StartCluster
	I0812 11:49:18.380131   57616 settings.go:142] acquiring lock: {Name:mk4060151bda1f131d6abfbbd19b6a8ab3b5e774 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:49:18.380237   57616 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 11:49:18.382942   57616 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/kubeconfig: {Name:mke68f1d372d8e5baa69199b426efaec54860499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:49:18.383329   57616 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.148 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 11:49:18.383406   57616 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0812 11:49:18.383564   57616 addons.go:69] Setting storage-provisioner=true in profile "no-preload-993542"
	I0812 11:49:18.383573   57616 addons.go:69] Setting default-storageclass=true in profile "no-preload-993542"
	I0812 11:49:18.383603   57616 addons.go:234] Setting addon storage-provisioner=true in "no-preload-993542"
	W0812 11:49:18.383618   57616 addons.go:243] addon storage-provisioner should already be in state true
	I0812 11:49:18.383620   57616 config.go:182] Loaded profile config "no-preload-993542": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0812 11:49:18.383634   57616 addons.go:69] Setting metrics-server=true in profile "no-preload-993542"
	I0812 11:49:18.383653   57616 host.go:66] Checking if "no-preload-993542" exists ...
	I0812 11:49:18.383621   57616 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-993542"
	I0812 11:49:18.383662   57616 addons.go:234] Setting addon metrics-server=true in "no-preload-993542"
	W0812 11:49:18.383674   57616 addons.go:243] addon metrics-server should already be in state true
	I0812 11:49:18.383708   57616 host.go:66] Checking if "no-preload-993542" exists ...
	I0812 11:49:18.384042   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.384072   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.384089   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.384117   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.384181   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.384211   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.386531   57616 out.go:177] * Verifying Kubernetes components...
	I0812 11:49:18.388412   57616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:49:18.404269   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44399
	I0812 11:49:18.404302   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36427
	I0812 11:49:18.404279   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43565
	I0812 11:49:18.405011   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.405062   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.405012   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.405601   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.405603   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.405621   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.405636   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.405743   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.405769   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.406150   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.406174   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.406184   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.406762   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.406786   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.407101   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetState
	I0812 11:49:18.407395   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.407420   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.411782   57616 addons.go:234] Setting addon default-storageclass=true in "no-preload-993542"
	W0812 11:49:18.411813   57616 addons.go:243] addon default-storageclass should already be in state true
	I0812 11:49:18.411843   57616 host.go:66] Checking if "no-preload-993542" exists ...
	I0812 11:49:18.412202   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.412241   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.428999   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41401
	I0812 11:49:18.429469   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.430064   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.430087   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.430147   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45407
	I0812 11:49:18.430442   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.430500   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.430762   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetState
	I0812 11:49:18.431525   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.431539   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.431950   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.432152   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetState
	I0812 11:49:18.432474   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46405
	I0812 11:49:18.432876   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.433599   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.433618   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.433872   57616 main.go:141] libmachine: (no-preload-993542) Calling .DriverName
	I0812 11:49:18.434119   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.434381   57616 main.go:141] libmachine: (no-preload-993542) Calling .DriverName
	I0812 11:49:18.434819   57616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:18.434875   57616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:18.436590   57616 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 11:49:18.436703   57616 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0812 11:49:16.285160   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:18.438442   57616 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 11:49:18.438466   57616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 11:49:18.438489   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHHostname
	I0812 11:49:18.438698   57616 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0812 11:49:18.438713   57616 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0812 11:49:18.438731   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHHostname
	I0812 11:49:18.443927   57616 main.go:141] libmachine: (no-preload-993542) DBG | domain no-preload-993542 has defined MAC address 52:54:00:bc:11:b5 in network mk-no-preload-993542
	I0812 11:49:18.443965   57616 main.go:141] libmachine: (no-preload-993542) DBG | domain no-preload-993542 has defined MAC address 52:54:00:bc:11:b5 in network mk-no-preload-993542
	I0812 11:49:18.444276   57616 main.go:141] libmachine: (no-preload-993542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:11:b5", ip: ""} in network mk-no-preload-993542: {Iface:virbr1 ExpiryTime:2024-08-12 12:43:50 +0000 UTC Type:0 Mac:52:54:00:bc:11:b5 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:no-preload-993542 Clientid:01:52:54:00:bc:11:b5}
	I0812 11:49:18.444315   57616 main.go:141] libmachine: (no-preload-993542) DBG | domain no-preload-993542 has defined IP address 192.168.61.148 and MAC address 52:54:00:bc:11:b5 in network mk-no-preload-993542
	I0812 11:49:18.444373   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHPort
	I0812 11:49:18.444614   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHKeyPath
	I0812 11:49:18.444790   57616 main.go:141] libmachine: (no-preload-993542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:11:b5", ip: ""} in network mk-no-preload-993542: {Iface:virbr1 ExpiryTime:2024-08-12 12:43:50 +0000 UTC Type:0 Mac:52:54:00:bc:11:b5 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:no-preload-993542 Clientid:01:52:54:00:bc:11:b5}
	I0812 11:49:18.444824   57616 main.go:141] libmachine: (no-preload-993542) DBG | domain no-preload-993542 has defined IP address 192.168.61.148 and MAC address 52:54:00:bc:11:b5 in network mk-no-preload-993542
	I0812 11:49:18.444851   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHUsername
	I0812 11:49:18.445055   57616 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/no-preload-993542/id_rsa Username:docker}
	I0812 11:49:18.445427   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHPort
	I0812 11:49:18.445624   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHKeyPath
	I0812 11:49:18.445776   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHUsername
	I0812 11:49:18.445938   57616 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/no-preload-993542/id_rsa Username:docker}
	I0812 11:49:18.457462   57616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I0812 11:49:18.457995   57616 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:18.458573   57616 main.go:141] libmachine: Using API Version  1
	I0812 11:49:18.458602   57616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:18.459048   57616 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:18.459315   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetState
	I0812 11:49:18.461486   57616 main.go:141] libmachine: (no-preload-993542) Calling .DriverName
	I0812 11:49:18.461753   57616 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 11:49:18.461770   57616 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 11:49:18.461788   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHHostname
	I0812 11:49:18.465243   57616 main.go:141] libmachine: (no-preload-993542) DBG | domain no-preload-993542 has defined MAC address 52:54:00:bc:11:b5 in network mk-no-preload-993542
	I0812 11:49:18.465776   57616 main.go:141] libmachine: (no-preload-993542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:11:b5", ip: ""} in network mk-no-preload-993542: {Iface:virbr1 ExpiryTime:2024-08-12 12:43:50 +0000 UTC Type:0 Mac:52:54:00:bc:11:b5 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:no-preload-993542 Clientid:01:52:54:00:bc:11:b5}
	I0812 11:49:18.465803   57616 main.go:141] libmachine: (no-preload-993542) DBG | domain no-preload-993542 has defined IP address 192.168.61.148 and MAC address 52:54:00:bc:11:b5 in network mk-no-preload-993542
	I0812 11:49:18.465981   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHPort
	I0812 11:49:18.466172   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHKeyPath
	I0812 11:49:18.466325   57616 main.go:141] libmachine: (no-preload-993542) Calling .GetSSHUsername
	I0812 11:49:18.466478   57616 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/no-preload-993542/id_rsa Username:docker}
	I0812 11:49:18.649285   57616 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 11:49:18.666240   57616 node_ready.go:35] waiting up to 6m0s for node "no-preload-993542" to be "Ready" ...
	I0812 11:49:18.675741   57616 node_ready.go:49] node "no-preload-993542" has status "Ready":"True"
	I0812 11:49:18.675769   57616 node_ready.go:38] duration metric: took 9.489483ms for node "no-preload-993542" to be "Ready" ...
	I0812 11:49:18.675781   57616 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:49:18.687934   57616 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-2gc2z" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:18.762652   57616 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 11:49:18.769504   57616 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0812 11:49:18.769533   57616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0812 11:49:18.801182   57616 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 11:49:18.815215   57616 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0812 11:49:18.815249   57616 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0812 11:49:18.869830   57616 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 11:49:18.869856   57616 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0812 11:49:18.943609   57616 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 11:49:19.326108   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.326145   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.326183   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.326200   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.326517   57616 main.go:141] libmachine: (no-preload-993542) DBG | Closing plugin on server side
	I0812 11:49:19.326543   57616 main.go:141] libmachine: (no-preload-993542) DBG | Closing plugin on server side
	I0812 11:49:19.326558   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.326571   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.326577   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.326580   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.326586   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.326588   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.326597   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.326598   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.326969   57616 main.go:141] libmachine: (no-preload-993542) DBG | Closing plugin on server side
	I0812 11:49:19.326997   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.327005   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.327232   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.327247   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.349315   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.349341   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.349693   57616 main.go:141] libmachine: (no-preload-993542) DBG | Closing plugin on server side
	I0812 11:49:19.349737   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.349746   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.620732   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.620765   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.621097   57616 main.go:141] libmachine: (no-preload-993542) DBG | Closing plugin on server side
	I0812 11:49:19.621143   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.621160   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.621170   57616 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:19.621182   57616 main.go:141] libmachine: (no-preload-993542) Calling .Close
	I0812 11:49:19.621446   57616 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:19.621469   57616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:19.621481   57616 addons.go:475] Verifying addon metrics-server=true in "no-preload-993542"
	I0812 11:49:19.624757   57616 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0812 11:49:19.626510   57616 addons.go:510] duration metric: took 1.243102289s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0812 11:49:20.695552   57616 pod_ready.go:102] pod "coredns-6f6b679f8f-2gc2z" in "kube-system" namespace has status "Ready":"False"
	I0812 11:49:22.762626   56845 kubeadm.go:310] [api-check] The API server is healthy after 5.002108915s
	I0812 11:49:22.782365   56845 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0812 11:49:22.794869   56845 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0812 11:49:22.829058   56845 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0812 11:49:22.829314   56845 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-093615 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0812 11:49:22.842722   56845 kubeadm.go:310] [bootstrap-token] Using token: e42mo3.61s6ofjvy51u5vh7
	I0812 11:49:22.844590   56845 out.go:204]   - Configuring RBAC rules ...
	I0812 11:49:22.844745   56845 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0812 11:49:22.851804   56845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0812 11:49:22.861419   56845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0812 11:49:22.866597   56845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0812 11:49:22.870810   56845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0812 11:49:22.886117   56845 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0812 11:49:22.365060   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:23.168156   56845 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0812 11:49:23.612002   56845 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0812 11:49:24.170270   56845 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0812 11:49:24.171014   56845 kubeadm.go:310] 
	I0812 11:49:24.171076   56845 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0812 11:49:24.171084   56845 kubeadm.go:310] 
	I0812 11:49:24.171146   56845 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0812 11:49:24.171153   56845 kubeadm.go:310] 
	I0812 11:49:24.171204   56845 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0812 11:49:24.171801   56845 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0812 11:49:24.171846   56845 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0812 11:49:24.171853   56845 kubeadm.go:310] 
	I0812 11:49:24.171954   56845 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0812 11:49:24.171975   56845 kubeadm.go:310] 
	I0812 11:49:24.172039   56845 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0812 11:49:24.172051   56845 kubeadm.go:310] 
	I0812 11:49:24.172125   56845 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0812 11:49:24.172247   56845 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0812 11:49:24.172360   56845 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0812 11:49:24.172378   56845 kubeadm.go:310] 
	I0812 11:49:24.172498   56845 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0812 11:49:24.172601   56845 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0812 11:49:24.172611   56845 kubeadm.go:310] 
	I0812 11:49:24.172772   56845 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token e42mo3.61s6ofjvy51u5vh7 \
	I0812 11:49:24.172908   56845 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc \
	I0812 11:49:24.172944   56845 kubeadm.go:310] 	--control-plane 
	I0812 11:49:24.172953   56845 kubeadm.go:310] 
	I0812 11:49:24.173063   56845 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0812 11:49:24.173073   56845 kubeadm.go:310] 
	I0812 11:49:24.173209   56845 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token e42mo3.61s6ofjvy51u5vh7 \
	I0812 11:49:24.173363   56845 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:357ff5d440bfe57210ab0f733a2631cb2582e8abe72ab85a1cb79f788b88edcc 
	I0812 11:49:24.173919   56845 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 11:49:24.173990   56845 cni.go:84] Creating CNI manager for ""
	I0812 11:49:24.174008   56845 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:49:24.176549   56845 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0812 11:49:25.662550   57198 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0812 11:49:25.662668   57198 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0812 11:49:25.664487   57198 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0812 11:49:25.664563   57198 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 11:49:25.664640   57198 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 11:49:25.664729   57198 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 11:49:25.664809   57198 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0812 11:49:25.664949   57198 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 11:49:25.666793   57198 out.go:204]   - Generating certificates and keys ...
	I0812 11:49:25.666861   57198 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 11:49:25.666925   57198 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 11:49:25.667017   57198 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0812 11:49:25.667091   57198 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0812 11:49:25.667181   57198 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0812 11:49:25.667232   57198 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0812 11:49:25.667306   57198 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0812 11:49:25.667359   57198 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0812 11:49:25.667437   57198 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0812 11:49:25.667536   57198 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0812 11:49:25.667592   57198 kubeadm.go:310] [certs] Using the existing "sa" key
	I0812 11:49:25.667680   57198 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 11:49:25.667754   57198 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 11:49:25.667839   57198 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 11:49:25.667950   57198 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 11:49:25.668040   57198 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 11:49:25.668189   57198 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 11:49:25.668289   57198 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 11:49:25.668333   57198 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 11:49:25.668400   57198 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 11:49:22.696279   57616 pod_ready.go:102] pod "coredns-6f6b679f8f-2gc2z" in "kube-system" namespace has status "Ready":"False"
	I0812 11:49:25.194695   57616 pod_ready.go:102] pod "coredns-6f6b679f8f-2gc2z" in "kube-system" namespace has status "Ready":"False"
	I0812 11:49:25.695175   57616 pod_ready.go:92] pod "coredns-6f6b679f8f-2gc2z" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:25.695199   57616 pod_ready.go:81] duration metric: took 7.007233179s for pod "coredns-6f6b679f8f-2gc2z" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:25.695209   57616 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-shfmr" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:25.670765   57198 out.go:204]   - Booting up control plane ...
	I0812 11:49:25.670861   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 11:49:25.670939   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 11:49:25.671039   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 11:49:25.671150   57198 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 11:49:25.671295   57198 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0812 11:49:25.671379   57198 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0812 11:49:25.671476   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:49:25.671647   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:49:25.671705   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:49:25.671862   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:49:25.671919   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:49:25.672079   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:49:25.672136   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:49:25.672288   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:49:25.672347   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:49:25.672558   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:49:25.672576   57198 kubeadm.go:310] 
	I0812 11:49:25.672636   57198 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0812 11:49:25.672686   57198 kubeadm.go:310] 		timed out waiting for the condition
	I0812 11:49:25.672701   57198 kubeadm.go:310] 
	I0812 11:49:25.672757   57198 kubeadm.go:310] 	This error is likely caused by:
	I0812 11:49:25.672811   57198 kubeadm.go:310] 		- The kubelet is not running
	I0812 11:49:25.672932   57198 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0812 11:49:25.672941   57198 kubeadm.go:310] 
	I0812 11:49:25.673048   57198 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0812 11:49:25.673091   57198 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0812 11:49:25.673133   57198 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0812 11:49:25.673141   57198 kubeadm.go:310] 
	I0812 11:49:25.673242   57198 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0812 11:49:25.673343   57198 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0812 11:49:25.673353   57198 kubeadm.go:310] 
	I0812 11:49:25.673513   57198 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0812 11:49:25.673593   57198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0812 11:49:25.673660   57198 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0812 11:49:25.673724   57198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0812 11:49:25.673768   57198 kubeadm.go:310] 
	W0812 11:49:25.673837   57198 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0812 11:49:25.673882   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0812 11:49:26.145437   57198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:49:26.160316   57198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 11:49:26.169638   57198 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 11:49:26.169664   57198 kubeadm.go:157] found existing configuration files:
	
	I0812 11:49:26.169711   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 11:49:26.179210   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 11:49:26.179278   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 11:49:26.189165   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 11:49:26.198952   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 11:49:26.199019   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 11:49:26.208905   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 11:49:26.217947   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 11:49:26.218003   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 11:49:26.227048   57198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 11:49:26.235890   57198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 11:49:26.235946   57198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 11:49:26.245085   57198 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 11:49:26.313657   57198 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0812 11:49:26.313809   57198 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 11:49:26.463967   57198 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 11:49:26.464098   57198 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 11:49:26.464204   57198 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0812 11:49:26.650503   57198 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 11:49:26.652540   57198 out.go:204]   - Generating certificates and keys ...
	I0812 11:49:26.652631   57198 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 11:49:26.652686   57198 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 11:49:26.652751   57198 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0812 11:49:26.652803   57198 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0812 11:49:26.652913   57198 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0812 11:49:26.652983   57198 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0812 11:49:26.653052   57198 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0812 11:49:26.653157   57198 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0812 11:49:26.653299   57198 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0812 11:49:26.653430   57198 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0812 11:49:26.653489   57198 kubeadm.go:310] [certs] Using the existing "sa" key
	I0812 11:49:26.653569   57198 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 11:49:26.881003   57198 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 11:49:26.962055   57198 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 11:49:27.166060   57198 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 11:49:27.340900   57198 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 11:49:27.359946   57198 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 11:49:27.362022   57198 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 11:49:27.362302   57198 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 11:49:27.515254   57198 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 11:49:24.177809   56845 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0812 11:49:24.188175   56845 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0812 11:49:24.208060   56845 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 11:49:24.208152   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:24.208209   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-093615 minikube.k8s.io/updated_at=2024_08_12T11_49_24_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7 minikube.k8s.io/name=embed-certs-093615 minikube.k8s.io/primary=true
	I0812 11:49:24.393211   56845 ops.go:34] apiserver oom_adj: -16
	I0812 11:49:24.393296   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:24.894092   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:25.394229   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:25.893667   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:26.394057   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:26.893509   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:27.394296   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:27.893453   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:25.441104   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:27.517314   57198 out.go:204]   - Booting up control plane ...
	I0812 11:49:27.517444   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 11:49:27.523528   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 11:49:27.524732   57198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 11:49:27.525723   57198 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 11:49:27.527868   57198 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0812 11:49:27.702461   57616 pod_ready.go:102] pod "coredns-6f6b679f8f-shfmr" in "kube-system" namespace has status "Ready":"False"
	I0812 11:49:28.202582   57616 pod_ready.go:92] pod "coredns-6f6b679f8f-shfmr" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:28.202608   57616 pod_ready.go:81] duration metric: took 2.507391262s for pod "coredns-6f6b679f8f-shfmr" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.202621   57616 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.207529   57616 pod_ready.go:92] pod "etcd-no-preload-993542" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:28.207551   57616 pod_ready.go:81] duration metric: took 4.923206ms for pod "etcd-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.207560   57616 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.212760   57616 pod_ready.go:92] pod "kube-apiserver-no-preload-993542" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:28.212794   57616 pod_ready.go:81] duration metric: took 5.223592ms for pod "kube-apiserver-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.212807   57616 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.216970   57616 pod_ready.go:92] pod "kube-controller-manager-no-preload-993542" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:28.216993   57616 pod_ready.go:81] duration metric: took 4.177186ms for pod "kube-controller-manager-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.217004   57616 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8jwkz" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.221078   57616 pod_ready.go:92] pod "kube-proxy-8jwkz" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:28.221096   57616 pod_ready.go:81] duration metric: took 4.085629ms for pod "kube-proxy-8jwkz" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.221105   57616 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.600004   57616 pod_ready.go:92] pod "kube-scheduler-no-preload-993542" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:28.600031   57616 pod_ready.go:81] duration metric: took 378.92044ms for pod "kube-scheduler-no-preload-993542" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:28.600039   57616 pod_ready.go:38] duration metric: took 9.924247425s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:49:28.600053   57616 api_server.go:52] waiting for apiserver process to appear ...
	I0812 11:49:28.600102   57616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:49:28.615007   57616 api_server.go:72] duration metric: took 10.231634381s to wait for apiserver process to appear ...
	I0812 11:49:28.615043   57616 api_server.go:88] waiting for apiserver healthz status ...
	I0812 11:49:28.615063   57616 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8443/healthz ...
	I0812 11:49:28.620301   57616 api_server.go:279] https://192.168.61.148:8443/healthz returned 200:
	ok
	I0812 11:49:28.621814   57616 api_server.go:141] control plane version: v1.31.0-rc.0
	I0812 11:49:28.621843   57616 api_server.go:131] duration metric: took 6.792657ms to wait for apiserver health ...
	I0812 11:49:28.621858   57616 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 11:49:28.804172   57616 system_pods.go:59] 9 kube-system pods found
	I0812 11:49:28.804204   57616 system_pods.go:61] "coredns-6f6b679f8f-2gc2z" [4d5375c0-6f19-40b7-98bc-50d4ef45fd93] Running
	I0812 11:49:28.804208   57616 system_pods.go:61] "coredns-6f6b679f8f-shfmr" [6fd90de8-af9e-4b43-9fa7-b503a00e9845] Running
	I0812 11:49:28.804213   57616 system_pods.go:61] "etcd-no-preload-993542" [c3144e52-830b-47f1-913d-e44880368ee4] Running
	I0812 11:49:28.804216   57616 system_pods.go:61] "kube-apiserver-no-preload-993542" [73061d9a-d3cd-421a-bbd5-7bfe221d8729] Running
	I0812 11:49:28.804219   57616 system_pods.go:61] "kube-controller-manager-no-preload-993542" [0999e6c2-30b8-4d53-9420-6a00757eb9d4] Running
	I0812 11:49:28.804224   57616 system_pods.go:61] "kube-proxy-8jwkz" [43501e17-fde3-4468-a170-e64a58088ec2] Running
	I0812 11:49:28.804227   57616 system_pods.go:61] "kube-scheduler-no-preload-993542" [edaa4d82-7994-4052-ba5b-5729c543c006] Running
	I0812 11:49:28.804232   57616 system_pods.go:61] "metrics-server-6867b74b74-25zg8" [70d17780-d4bc-4df4-93ac-bb74c1fa50f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 11:49:28.804236   57616 system_pods.go:61] "storage-provisioner" [beb7a321-e575-44e5-8d10-3749d1285806] Running
	I0812 11:49:28.804244   57616 system_pods.go:74] duration metric: took 182.379622ms to wait for pod list to return data ...
	I0812 11:49:28.804251   57616 default_sa.go:34] waiting for default service account to be created ...
	I0812 11:49:28.999537   57616 default_sa.go:45] found service account: "default"
	I0812 11:49:28.999571   57616 default_sa.go:55] duration metric: took 195.31354ms for default service account to be created ...
	I0812 11:49:28.999582   57616 system_pods.go:116] waiting for k8s-apps to be running ...
	I0812 11:49:29.205266   57616 system_pods.go:86] 9 kube-system pods found
	I0812 11:49:29.205296   57616 system_pods.go:89] "coredns-6f6b679f8f-2gc2z" [4d5375c0-6f19-40b7-98bc-50d4ef45fd93] Running
	I0812 11:49:29.205301   57616 system_pods.go:89] "coredns-6f6b679f8f-shfmr" [6fd90de8-af9e-4b43-9fa7-b503a00e9845] Running
	I0812 11:49:29.205306   57616 system_pods.go:89] "etcd-no-preload-993542" [c3144e52-830b-47f1-913d-e44880368ee4] Running
	I0812 11:49:29.205310   57616 system_pods.go:89] "kube-apiserver-no-preload-993542" [73061d9a-d3cd-421a-bbd5-7bfe221d8729] Running
	I0812 11:49:29.205315   57616 system_pods.go:89] "kube-controller-manager-no-preload-993542" [0999e6c2-30b8-4d53-9420-6a00757eb9d4] Running
	I0812 11:49:29.205319   57616 system_pods.go:89] "kube-proxy-8jwkz" [43501e17-fde3-4468-a170-e64a58088ec2] Running
	I0812 11:49:29.205323   57616 system_pods.go:89] "kube-scheduler-no-preload-993542" [edaa4d82-7994-4052-ba5b-5729c543c006] Running
	I0812 11:49:29.205329   57616 system_pods.go:89] "metrics-server-6867b74b74-25zg8" [70d17780-d4bc-4df4-93ac-bb74c1fa50f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 11:49:29.205335   57616 system_pods.go:89] "storage-provisioner" [beb7a321-e575-44e5-8d10-3749d1285806] Running
	I0812 11:49:29.205342   57616 system_pods.go:126] duration metric: took 205.754437ms to wait for k8s-apps to be running ...
	I0812 11:49:29.205348   57616 system_svc.go:44] waiting for kubelet service to be running ....
	I0812 11:49:29.205390   57616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:49:29.220297   57616 system_svc.go:56] duration metric: took 14.940181ms WaitForService to wait for kubelet
	I0812 11:49:29.220343   57616 kubeadm.go:582] duration metric: took 10.836962086s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 11:49:29.220369   57616 node_conditions.go:102] verifying NodePressure condition ...
	I0812 11:49:29.400598   57616 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 11:49:29.400634   57616 node_conditions.go:123] node cpu capacity is 2
	I0812 11:49:29.400648   57616 node_conditions.go:105] duration metric: took 180.272764ms to run NodePressure ...
	I0812 11:49:29.400663   57616 start.go:241] waiting for startup goroutines ...
	I0812 11:49:29.400675   57616 start.go:246] waiting for cluster config update ...
	I0812 11:49:29.400691   57616 start.go:255] writing updated cluster config ...
	I0812 11:49:29.401086   57616 ssh_runner.go:195] Run: rm -f paused
	I0812 11:49:29.454975   57616 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-rc.0 (minor skew: 1)
	I0812 11:49:29.457349   57616 out.go:177] * Done! kubectl is now configured to use "no-preload-993542" cluster and "default" namespace by default
	I0812 11:49:28.394104   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:28.894284   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:29.393380   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:29.893417   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:30.394034   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:30.893668   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:31.394322   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:31.894069   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:32.393691   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:32.893944   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:31.517192   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:33.393880   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:33.894126   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:34.393857   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:34.893356   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:35.394181   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:35.894116   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:36.393690   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:36.893650   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:37.394325   56845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:49:37.524187   56845 kubeadm.go:1113] duration metric: took 13.316085022s to wait for elevateKubeSystemPrivileges
	I0812 11:49:37.524225   56845 kubeadm.go:394] duration metric: took 5m12.500523071s to StartCluster
	I0812 11:49:37.524246   56845 settings.go:142] acquiring lock: {Name:mk4060151bda1f131d6abfbbd19b6a8ab3b5e774 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:49:37.524334   56845 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 11:49:37.526822   56845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/kubeconfig: {Name:mke68f1d372d8e5baa69199b426efaec54860499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:49:37.527125   56845 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.191 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 11:49:37.527189   56845 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0812 11:49:37.527272   56845 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-093615"
	I0812 11:49:37.527285   56845 addons.go:69] Setting default-storageclass=true in profile "embed-certs-093615"
	I0812 11:49:37.527307   56845 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-093615"
	I0812 11:49:37.527307   56845 config.go:182] Loaded profile config "embed-certs-093615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	W0812 11:49:37.527315   56845 addons.go:243] addon storage-provisioner should already be in state true
	I0812 11:49:37.527318   56845 addons.go:69] Setting metrics-server=true in profile "embed-certs-093615"
	I0812 11:49:37.527337   56845 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-093615"
	I0812 11:49:37.527345   56845 host.go:66] Checking if "embed-certs-093615" exists ...
	I0812 11:49:37.527362   56845 addons.go:234] Setting addon metrics-server=true in "embed-certs-093615"
	W0812 11:49:37.527375   56845 addons.go:243] addon metrics-server should already be in state true
	I0812 11:49:37.527413   56845 host.go:66] Checking if "embed-certs-093615" exists ...
	I0812 11:49:37.527769   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.527791   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.527816   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.527798   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.527769   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.527928   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.528806   56845 out.go:177] * Verifying Kubernetes components...
	I0812 11:49:37.530366   56845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:49:37.544367   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33533
	I0812 11:49:37.544919   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45995
	I0812 11:49:37.545052   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.545492   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.545535   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.545551   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.546095   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.546220   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.546247   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.546267   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetState
	I0812 11:49:37.547090   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.547667   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.547697   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.548008   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45761
	I0812 11:49:37.550024   56845 addons.go:234] Setting addon default-storageclass=true in "embed-certs-093615"
	W0812 11:49:37.550048   56845 addons.go:243] addon default-storageclass should already be in state true
	I0812 11:49:37.550079   56845 host.go:66] Checking if "embed-certs-093615" exists ...
	I0812 11:49:37.550469   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.550500   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.550728   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.551342   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.551373   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.551748   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.552314   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.552354   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.566505   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40769
	I0812 11:49:37.567085   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.567510   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.567526   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.567900   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.568133   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetState
	I0812 11:49:37.570307   56845 main.go:141] libmachine: (embed-certs-093615) Calling .DriverName
	I0812 11:49:37.571789   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36425
	I0812 11:49:37.572127   56845 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 11:49:37.572191   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.572730   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.572752   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.573044   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43723
	I0812 11:49:37.573231   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.573619   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.573815   56845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:49:37.573840   56845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:49:37.573849   56845 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 11:49:37.573870   56845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 11:49:37.573890   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHHostname
	I0812 11:49:37.574787   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.574809   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.575722   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.575937   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetState
	I0812 11:49:37.578054   56845 main.go:141] libmachine: (embed-certs-093615) DBG | domain embed-certs-093615 has defined MAC address 52:54:00:a2:eb:0c in network mk-embed-certs-093615
	I0812 11:49:37.578069   56845 main.go:141] libmachine: (embed-certs-093615) Calling .DriverName
	I0812 11:49:37.578536   56845 main.go:141] libmachine: (embed-certs-093615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:eb:0c", ip: ""} in network mk-embed-certs-093615: {Iface:virbr3 ExpiryTime:2024-08-12 12:44:10 +0000 UTC Type:0 Mac:52:54:00:a2:eb:0c Iaid: IPaddr:192.168.72.191 Prefix:24 Hostname:embed-certs-093615 Clientid:01:52:54:00:a2:eb:0c}
	I0812 11:49:37.578565   56845 main.go:141] libmachine: (embed-certs-093615) DBG | domain embed-certs-093615 has defined IP address 192.168.72.191 and MAC address 52:54:00:a2:eb:0c in network mk-embed-certs-093615
	I0812 11:49:37.578833   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHPort
	I0812 11:49:37.579012   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHKeyPath
	I0812 11:49:37.579170   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHUsername
	I0812 11:49:37.579326   56845 sshutil.go:53] new ssh client: &{IP:192.168.72.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/embed-certs-093615/id_rsa Username:docker}
	I0812 11:49:37.580007   56845 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0812 11:49:37.581298   56845 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0812 11:49:37.581313   56845 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0812 11:49:37.581334   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHHostname
	I0812 11:49:37.585114   56845 main.go:141] libmachine: (embed-certs-093615) DBG | domain embed-certs-093615 has defined MAC address 52:54:00:a2:eb:0c in network mk-embed-certs-093615
	I0812 11:49:37.585809   56845 main.go:141] libmachine: (embed-certs-093615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:eb:0c", ip: ""} in network mk-embed-certs-093615: {Iface:virbr3 ExpiryTime:2024-08-12 12:44:10 +0000 UTC Type:0 Mac:52:54:00:a2:eb:0c Iaid: IPaddr:192.168.72.191 Prefix:24 Hostname:embed-certs-093615 Clientid:01:52:54:00:a2:eb:0c}
	I0812 11:49:37.585839   56845 main.go:141] libmachine: (embed-certs-093615) DBG | domain embed-certs-093615 has defined IP address 192.168.72.191 and MAC address 52:54:00:a2:eb:0c in network mk-embed-certs-093615
	I0812 11:49:37.585914   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHPort
	I0812 11:49:37.586160   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHKeyPath
	I0812 11:49:37.586338   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHUsername
	I0812 11:49:37.586476   56845 sshutil.go:53] new ssh client: &{IP:192.168.72.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/embed-certs-093615/id_rsa Username:docker}
	I0812 11:49:37.591678   56845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I0812 11:49:37.592146   56845 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:49:37.592684   56845 main.go:141] libmachine: Using API Version  1
	I0812 11:49:37.592702   56845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:49:37.593075   56845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:49:37.593241   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetState
	I0812 11:49:37.595117   56845 main.go:141] libmachine: (embed-certs-093615) Calling .DriverName
	I0812 11:49:37.595398   56845 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 11:49:37.595413   56845 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 11:49:37.595430   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHHostname
	I0812 11:49:37.598417   56845 main.go:141] libmachine: (embed-certs-093615) DBG | domain embed-certs-093615 has defined MAC address 52:54:00:a2:eb:0c in network mk-embed-certs-093615
	I0812 11:49:37.598771   56845 main.go:141] libmachine: (embed-certs-093615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:eb:0c", ip: ""} in network mk-embed-certs-093615: {Iface:virbr3 ExpiryTime:2024-08-12 12:44:10 +0000 UTC Type:0 Mac:52:54:00:a2:eb:0c Iaid: IPaddr:192.168.72.191 Prefix:24 Hostname:embed-certs-093615 Clientid:01:52:54:00:a2:eb:0c}
	I0812 11:49:37.598792   56845 main.go:141] libmachine: (embed-certs-093615) DBG | domain embed-certs-093615 has defined IP address 192.168.72.191 and MAC address 52:54:00:a2:eb:0c in network mk-embed-certs-093615
	I0812 11:49:37.599008   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHPort
	I0812 11:49:37.599209   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHKeyPath
	I0812 11:49:37.599369   56845 main.go:141] libmachine: (embed-certs-093615) Calling .GetSSHUsername
	I0812 11:49:37.599507   56845 sshutil.go:53] new ssh client: &{IP:192.168.72.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/embed-certs-093615/id_rsa Username:docker}
	I0812 11:49:37.757714   56845 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 11:49:37.783594   56845 node_ready.go:35] waiting up to 6m0s for node "embed-certs-093615" to be "Ready" ...
	I0812 11:49:37.801679   56845 node_ready.go:49] node "embed-certs-093615" has status "Ready":"True"
	I0812 11:49:37.801707   56845 node_ready.go:38] duration metric: took 18.078817ms for node "embed-certs-093615" to be "Ready" ...
	I0812 11:49:37.801719   56845 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:49:37.814704   56845 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cjbwn" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:37.860064   56845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 11:49:37.913642   56845 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0812 11:49:37.913673   56845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0812 11:49:37.932638   56845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 11:49:37.948027   56845 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0812 11:49:37.948052   56845 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0812 11:49:38.000773   56845 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 11:49:38.000805   56845 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0812 11:49:38.050478   56845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 11:49:38.655431   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.655458   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.655477   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.655460   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.655760   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.655875   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.655888   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.655897   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.655792   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.655971   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.655979   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.655986   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.655812   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.655832   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.656156   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.656161   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.656172   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.656199   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.656225   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.656231   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.707240   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.707268   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.707596   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.707618   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.707667   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.832725   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.832758   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.833072   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.833114   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.833134   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.833155   56845 main.go:141] libmachine: Making call to close driver server
	I0812 11:49:38.833165   56845 main.go:141] libmachine: (embed-certs-093615) Calling .Close
	I0812 11:49:38.833416   56845 main.go:141] libmachine: (embed-certs-093615) DBG | Closing plugin on server side
	I0812 11:49:38.833461   56845 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:49:38.833472   56845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:49:38.833483   56845 addons.go:475] Verifying addon metrics-server=true in "embed-certs-093615"
	I0812 11:49:38.835319   56845 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0812 11:49:34.589171   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:38.836977   56845 addons.go:510] duration metric: took 1.309786928s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0812 11:49:39.827672   56845 pod_ready.go:102] pod "coredns-7db6d8ff4d-cjbwn" in "kube-system" namespace has status "Ready":"False"
	I0812 11:49:40.820793   56845 pod_ready.go:92] pod "coredns-7db6d8ff4d-cjbwn" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:40.820818   56845 pod_ready.go:81] duration metric: took 3.006078866s for pod "coredns-7db6d8ff4d-cjbwn" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.820828   56845 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zcpcc" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.825674   56845 pod_ready.go:92] pod "coredns-7db6d8ff4d-zcpcc" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:40.825696   56845 pod_ready.go:81] duration metric: took 4.862671ms for pod "coredns-7db6d8ff4d-zcpcc" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.825705   56845 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.830668   56845 pod_ready.go:92] pod "etcd-embed-certs-093615" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:40.830690   56845 pod_ready.go:81] duration metric: took 4.979449ms for pod "etcd-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.830699   56845 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.834732   56845 pod_ready.go:92] pod "kube-apiserver-embed-certs-093615" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:40.834750   56845 pod_ready.go:81] duration metric: took 4.044023ms for pod "kube-apiserver-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.834759   56845 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.838476   56845 pod_ready.go:92] pod "kube-controller-manager-embed-certs-093615" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:40.838493   56845 pod_ready.go:81] duration metric: took 3.728686ms for pod "kube-controller-manager-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:40.838502   56845 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-26xvl" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:41.219756   56845 pod_ready.go:92] pod "kube-proxy-26xvl" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:41.219778   56845 pod_ready.go:81] duration metric: took 381.271425ms for pod "kube-proxy-26xvl" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:41.219789   56845 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:41.619078   56845 pod_ready.go:92] pod "kube-scheduler-embed-certs-093615" in "kube-system" namespace has status "Ready":"True"
	I0812 11:49:41.619107   56845 pod_ready.go:81] duration metric: took 399.30989ms for pod "kube-scheduler-embed-certs-093615" in "kube-system" namespace to be "Ready" ...
	I0812 11:49:41.619117   56845 pod_ready.go:38] duration metric: took 3.817386457s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:49:41.619135   56845 api_server.go:52] waiting for apiserver process to appear ...
	I0812 11:49:41.619197   56845 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:49:41.634452   56845 api_server.go:72] duration metric: took 4.107285578s to wait for apiserver process to appear ...
	I0812 11:49:41.634480   56845 api_server.go:88] waiting for apiserver healthz status ...
	I0812 11:49:41.634505   56845 api_server.go:253] Checking apiserver healthz at https://192.168.72.191:8443/healthz ...
	I0812 11:49:41.639610   56845 api_server.go:279] https://192.168.72.191:8443/healthz returned 200:
	ok
	I0812 11:49:41.640514   56845 api_server.go:141] control plane version: v1.30.3
	I0812 11:49:41.640537   56845 api_server.go:131] duration metric: took 6.049802ms to wait for apiserver health ...
	I0812 11:49:41.640547   56845 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 11:49:41.823614   56845 system_pods.go:59] 9 kube-system pods found
	I0812 11:49:41.823652   56845 system_pods.go:61] "coredns-7db6d8ff4d-cjbwn" [ec8ff679-9b23-481d-b8c5-207b54e7e5ea] Running
	I0812 11:49:41.823659   56845 system_pods.go:61] "coredns-7db6d8ff4d-zcpcc" [ed76b19c-cd96-4754-ae07-08a2a0b91387] Running
	I0812 11:49:41.823665   56845 system_pods.go:61] "etcd-embed-certs-093615" [853d7fe8-00c2-434f-b88a-2b37e1608906] Running
	I0812 11:49:41.823670   56845 system_pods.go:61] "kube-apiserver-embed-certs-093615" [983122d1-800a-4991-96f8-29ae69ea7166] Running
	I0812 11:49:41.823675   56845 system_pods.go:61] "kube-controller-manager-embed-certs-093615" [b9eceb97-a4bd-43e2-a115-c483c9131fa7] Running
	I0812 11:49:41.823680   56845 system_pods.go:61] "kube-proxy-26xvl" [cacdea2f-2ce2-43ab-8e3e-104a7a40d027] Running
	I0812 11:49:41.823685   56845 system_pods.go:61] "kube-scheduler-embed-certs-093615" [b5653b7a-db54-4584-ab69-1232a9c58d9c] Running
	I0812 11:49:41.823693   56845 system_pods.go:61] "metrics-server-569cc877fc-kwk6t" [5817f68c-ab3e-4b50-acf1-8d56d25dcbcd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 11:49:41.823697   56845 system_pods.go:61] "storage-provisioner" [c29d9422-fc62-4536-974b-70ba940152c2] Running
	I0812 11:49:41.823704   56845 system_pods.go:74] duration metric: took 183.151482ms to wait for pod list to return data ...
	I0812 11:49:41.823711   56845 default_sa.go:34] waiting for default service account to be created ...
	I0812 11:49:42.017840   56845 default_sa.go:45] found service account: "default"
	I0812 11:49:42.017870   56845 default_sa.go:55] duration metric: took 194.151916ms for default service account to be created ...
	I0812 11:49:42.017886   56845 system_pods.go:116] waiting for k8s-apps to be running ...
	I0812 11:49:42.222050   56845 system_pods.go:86] 9 kube-system pods found
	I0812 11:49:42.222084   56845 system_pods.go:89] "coredns-7db6d8ff4d-cjbwn" [ec8ff679-9b23-481d-b8c5-207b54e7e5ea] Running
	I0812 11:49:42.222092   56845 system_pods.go:89] "coredns-7db6d8ff4d-zcpcc" [ed76b19c-cd96-4754-ae07-08a2a0b91387] Running
	I0812 11:49:42.222098   56845 system_pods.go:89] "etcd-embed-certs-093615" [853d7fe8-00c2-434f-b88a-2b37e1608906] Running
	I0812 11:49:42.222104   56845 system_pods.go:89] "kube-apiserver-embed-certs-093615" [983122d1-800a-4991-96f8-29ae69ea7166] Running
	I0812 11:49:42.222110   56845 system_pods.go:89] "kube-controller-manager-embed-certs-093615" [b9eceb97-a4bd-43e2-a115-c483c9131fa7] Running
	I0812 11:49:42.222116   56845 system_pods.go:89] "kube-proxy-26xvl" [cacdea2f-2ce2-43ab-8e3e-104a7a40d027] Running
	I0812 11:49:42.222122   56845 system_pods.go:89] "kube-scheduler-embed-certs-093615" [b5653b7a-db54-4584-ab69-1232a9c58d9c] Running
	I0812 11:49:42.222133   56845 system_pods.go:89] "metrics-server-569cc877fc-kwk6t" [5817f68c-ab3e-4b50-acf1-8d56d25dcbcd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 11:49:42.222140   56845 system_pods.go:89] "storage-provisioner" [c29d9422-fc62-4536-974b-70ba940152c2] Running
	I0812 11:49:42.222157   56845 system_pods.go:126] duration metric: took 204.263322ms to wait for k8s-apps to be running ...
	I0812 11:49:42.222169   56845 system_svc.go:44] waiting for kubelet service to be running ....
	I0812 11:49:42.222224   56845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:49:42.235891   56845 system_svc.go:56] duration metric: took 13.715083ms WaitForService to wait for kubelet
	I0812 11:49:42.235920   56845 kubeadm.go:582] duration metric: took 4.708757648s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 11:49:42.235945   56845 node_conditions.go:102] verifying NodePressure condition ...
	I0812 11:49:42.418727   56845 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 11:49:42.418761   56845 node_conditions.go:123] node cpu capacity is 2
	I0812 11:49:42.418773   56845 node_conditions.go:105] duration metric: took 182.823582ms to run NodePressure ...
	I0812 11:49:42.418789   56845 start.go:241] waiting for startup goroutines ...
	I0812 11:49:42.418799   56845 start.go:246] waiting for cluster config update ...
	I0812 11:49:42.418812   56845 start.go:255] writing updated cluster config ...
	I0812 11:49:42.419150   56845 ssh_runner.go:195] Run: rm -f paused
	I0812 11:49:42.468981   56845 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0812 11:49:42.471931   56845 out.go:177] * Done! kubectl is now configured to use "embed-certs-093615" cluster and "default" namespace by default
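(Editor's sketch.) The node_ready/pod_ready waits logged above for "embed-certs-093615" reduce to reading the Ready condition from the Kubernetes API until it reports True. Below is a minimal client-go sketch of the node-side check; it is not minikube's implementation, the kubeconfig path is a placeholder, and only the node name is taken from the log.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady fetches the named node and reports whether its Ready
// condition is True, mirroring the node_ready.go check in the log above.
func nodeIsReady(kubeconfig, name string) (bool, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return false, err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return false, err
	}
	node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// "/path/to/kubeconfig" is a placeholder; the node name comes from the log.
	ready, err := nodeIsReady("/path/to/kubeconfig", "embed-certs-093615")
	fmt.Println(ready, err)
}

In the test run above the same pattern is then repeated per system-critical pod (coredns, etcd, kube-apiserver, and so on) before the apiserver /healthz probe.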
	I0812 11:49:40.669207   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:43.741090   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:49.821138   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:52.893281   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:49:58.973141   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:02.045165   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:08.129133   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:07.530363   57198 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0812 11:50:07.530652   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:50:07.530821   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:50:11.197137   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:12.531246   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:50:12.531502   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:50:17.277119   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:20.349149   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:22.532192   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:50:22.532372   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:50:26.429100   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:29.501158   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:35.581137   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:38.653143   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:42.533597   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:50:42.533815   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:50:44.733130   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:47.805192   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:53.885100   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:50:56.957154   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:03.037201   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:06.109079   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:12.189138   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:15.261132   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:22.535173   57198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 11:51:22.535490   57198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 11:51:22.535516   57198 kubeadm.go:310] 
	I0812 11:51:22.535573   57198 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0812 11:51:22.535625   57198 kubeadm.go:310] 		timed out waiting for the condition
	I0812 11:51:22.535646   57198 kubeadm.go:310] 
	I0812 11:51:22.535692   57198 kubeadm.go:310] 	This error is likely caused by:
	I0812 11:51:22.535728   57198 kubeadm.go:310] 		- The kubelet is not running
	I0812 11:51:22.535855   57198 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0812 11:51:22.535870   57198 kubeadm.go:310] 
	I0812 11:51:22.535954   57198 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0812 11:51:22.535985   57198 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0812 11:51:22.536028   57198 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0812 11:51:22.536038   57198 kubeadm.go:310] 
	I0812 11:51:22.536168   57198 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0812 11:51:22.536276   57198 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0812 11:51:22.536290   57198 kubeadm.go:310] 
	I0812 11:51:22.536440   57198 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0812 11:51:22.536532   57198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0812 11:51:22.536610   57198 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0812 11:51:22.536692   57198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0812 11:51:22.536701   57198 kubeadm.go:310] 
	I0812 11:51:22.537300   57198 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 11:51:22.537416   57198 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0812 11:51:22.537516   57198 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0812 11:51:22.537602   57198 kubeadm.go:394] duration metric: took 7m56.533771451s to StartCluster
	I0812 11:51:22.537650   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:51:22.537769   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:51:22.583654   57198 cri.go:89] found id: ""
	I0812 11:51:22.583679   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.583686   57198 logs.go:278] No container was found matching "kube-apiserver"
	I0812 11:51:22.583692   57198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:51:22.583739   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:51:22.619477   57198 cri.go:89] found id: ""
	I0812 11:51:22.619510   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.619521   57198 logs.go:278] No container was found matching "etcd"
	I0812 11:51:22.619528   57198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:51:22.619586   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:51:22.653038   57198 cri.go:89] found id: ""
	I0812 11:51:22.653068   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.653078   57198 logs.go:278] No container was found matching "coredns"
	I0812 11:51:22.653085   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:51:22.653149   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:51:22.686106   57198 cri.go:89] found id: ""
	I0812 11:51:22.686134   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.686142   57198 logs.go:278] No container was found matching "kube-scheduler"
	I0812 11:51:22.686148   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:51:22.686196   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:51:22.723533   57198 cri.go:89] found id: ""
	I0812 11:51:22.723560   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.723567   57198 logs.go:278] No container was found matching "kube-proxy"
	I0812 11:51:22.723572   57198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:51:22.723629   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:51:22.767355   57198 cri.go:89] found id: ""
	I0812 11:51:22.767382   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.767390   57198 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 11:51:22.767395   57198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:51:22.767472   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:51:22.807472   57198 cri.go:89] found id: ""
	I0812 11:51:22.807509   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.807522   57198 logs.go:278] No container was found matching "kindnet"
	I0812 11:51:22.807530   57198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0812 11:51:22.807604   57198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0812 11:51:22.842565   57198 cri.go:89] found id: ""
	I0812 11:51:22.842594   57198 logs.go:276] 0 containers: []
	W0812 11:51:22.842603   57198 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0812 11:51:22.842615   57198 logs.go:123] Gathering logs for kubelet ...
	I0812 11:51:22.842629   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:51:22.894638   57198 logs.go:123] Gathering logs for dmesg ...
	I0812 11:51:22.894677   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:51:22.907871   57198 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:51:22.907902   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 11:51:22.989089   57198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 11:51:22.989114   57198 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:51:22.989126   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:51:23.114659   57198 logs.go:123] Gathering logs for container status ...
	I0812 11:51:23.114713   57198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0812 11:51:23.168124   57198 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
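(Editor's sketch.) The [kubelet-check] lines in the kubeadm output above amount to a simple probe: keep requesting the kubelet's healthz endpoint at http://localhost:10248/healthz until it answers 200 OK or the wait-control-plane deadline passes. The following minimal Go sketch shows that probe loop; it is not kubeadm's own code, and the 5-second poll interval is an assumption (the log only shows the 40s initial timeout and the 4m0s overall wait).

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForKubelet polls the given healthz URL until it returns 200 OK or the
// overall timeout expires, mirroring the repeated "connection refused"
// retries seen in the [kubelet-check] output above.
func waitForKubelet(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // kubelet is healthy
			}
		}
		time.Sleep(5 * time.Second) // poll interval is an assumption
	}
	return fmt.Errorf("timed out waiting for kubelet healthz at %s", url)
}

func main() {
	// Endpoint and 4m0s budget are taken from the kubeadm output above.
	if err := waitForKubelet("http://localhost:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}

In this run the probe never succeeds, which is why kubeadm falls through to the "timed out waiting for the condition" error repeated below.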
	W0812 11:51:23.168182   57198 out.go:239] * 
	W0812 11:51:23.168252   57198 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0812 11:51:23.168284   57198 out.go:239] * 
	W0812 11:51:23.169113   57198 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 11:51:23.173151   57198 out.go:177] 
	W0812 11:51:23.174712   57198 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0812 11:51:23.174762   57198 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0812 11:51:23.174782   57198 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0812 11:51:23.176508   57198 out.go:177] 
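(Editor's sketch.) The kubeadm failure above ends with concrete troubleshooting advice: inspect the kubelet unit and list kube containers over the CRI-O socket. A small Go sketch that runs those commands, taken verbatim from the log, via bash; it assumes it is executed as root inside the affected guest VM and is not part of the minikube test harness.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Commands copied from the kubeadm troubleshooting hints in the log above.
	cmds := []string{
		"systemctl status kubelet",
		"journalctl -xeu kubelet",
		"crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause",
	}
	for _, c := range cmds {
		// Run each command through bash so the pipes in the crictl line work.
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		fmt.Printf("$ %s\n%s(err=%v)\n\n", c, out, err)
	}
}

The log's follow-up suggestion of passing --extra-config=kubelet.cgroup-driver=systemd to minikube start points at the same root cause: a kubelet that never becomes healthy, most often due to a cgroup-driver mismatch.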
	I0812 11:51:21.341126   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:24.413107   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:30.493143   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:33.569122   59908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.114:22: connect: no route to host
	I0812 11:51:36.569554   59908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 11:51:36.569591   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetMachineName
	I0812 11:51:36.569943   59908 buildroot.go:166] provisioning hostname "default-k8s-diff-port-581883"
	I0812 11:51:36.569973   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetMachineName
	I0812 11:51:36.570201   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:51:36.571680   59908 machine.go:97] duration metric: took 4m37.426765365s to provisionDockerMachine
	I0812 11:51:36.571724   59908 fix.go:56] duration metric: took 4m37.448153773s for fixHost
	I0812 11:51:36.571736   59908 start.go:83] releasing machines lock for "default-k8s-diff-port-581883", held for 4m37.448177825s
	W0812 11:51:36.571759   59908 start.go:714] error starting host: provision: host is not running
	W0812 11:51:36.571863   59908 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0812 11:51:36.571879   59908 start.go:729] Will try again in 5 seconds ...
	I0812 11:51:41.573924   59908 start.go:360] acquireMachinesLock for default-k8s-diff-port-581883: {Name:mkf1f3ff7da562a087a1e344d13e67c3c8140973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 11:51:41.574052   59908 start.go:364] duration metric: took 85.852µs to acquireMachinesLock for "default-k8s-diff-port-581883"
	I0812 11:51:41.574082   59908 start.go:96] Skipping create...Using existing machine configuration
	I0812 11:51:41.574092   59908 fix.go:54] fixHost starting: 
	I0812 11:51:41.574362   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:51:41.574405   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:51:41.589947   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37355
	I0812 11:51:41.590440   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:51:41.590917   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:51:41.590937   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:51:41.591264   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:51:41.591434   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:51:41.591577   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetState
	I0812 11:51:41.593079   59908 fix.go:112] recreateIfNeeded on default-k8s-diff-port-581883: state=Stopped err=<nil>
	I0812 11:51:41.593104   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	W0812 11:51:41.593250   59908 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 11:51:41.595246   59908 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-581883" ...
	I0812 11:51:41.596770   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Start
	I0812 11:51:41.596979   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Ensuring networks are active...
	I0812 11:51:41.598006   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Ensuring network default is active
	I0812 11:51:41.598500   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Ensuring network mk-default-k8s-diff-port-581883 is active
	I0812 11:51:41.598920   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Getting domain xml...
	I0812 11:51:41.599684   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Creating domain...
	I0812 11:51:42.863317   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting to get IP...
	I0812 11:51:42.864358   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:42.864816   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:42.864907   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:42.864802   61181 retry.go:31] will retry after 220.174363ms: waiting for machine to come up
	I0812 11:51:43.086204   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:43.086832   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:43.086861   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:43.086783   61181 retry.go:31] will retry after 342.897936ms: waiting for machine to come up
	I0812 11:51:43.431059   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:43.431549   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:43.431584   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:43.431497   61181 retry.go:31] will retry after 465.154278ms: waiting for machine to come up
	I0812 11:51:43.898042   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:43.898580   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:43.898604   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:43.898518   61181 retry.go:31] will retry after 498.287765ms: waiting for machine to come up
	I0812 11:51:44.398086   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:44.398736   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:44.398763   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:44.398682   61181 retry.go:31] will retry after 617.809106ms: waiting for machine to come up
	I0812 11:51:45.018733   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:45.019273   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:45.019307   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:45.019217   61181 retry.go:31] will retry after 864.46319ms: waiting for machine to come up
	I0812 11:51:45.885081   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:45.885555   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:45.885585   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:45.885529   61181 retry.go:31] will retry after 1.067767105s: waiting for machine to come up
	I0812 11:51:46.954710   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:46.955061   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:46.955087   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:46.955020   61181 retry.go:31] will retry after 927.472236ms: waiting for machine to come up
	I0812 11:51:47.883766   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:47.884191   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:47.884216   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:47.884146   61181 retry.go:31] will retry after 1.493170608s: waiting for machine to come up
	I0812 11:51:49.378898   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:49.379317   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:49.379350   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:49.379297   61181 retry.go:31] will retry after 1.599397392s: waiting for machine to come up
	I0812 11:51:50.981013   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:50.981714   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:50.981745   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:50.981642   61181 retry.go:31] will retry after 1.779019847s: waiting for machine to come up
	I0812 11:51:52.762246   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:52.762670   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:52.762707   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:52.762629   61181 retry.go:31] will retry after 3.410620248s: waiting for machine to come up
	I0812 11:51:56.175010   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:51:56.175542   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | unable to find current IP address of domain default-k8s-diff-port-581883 in network mk-default-k8s-diff-port-581883
	I0812 11:51:56.175573   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | I0812 11:51:56.175490   61181 retry.go:31] will retry after 3.890343984s: waiting for machine to come up
	I0812 11:52:00.069904   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.070591   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has current primary IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.070606   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Found IP for machine: 192.168.50.114
	I0812 11:52:00.070616   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Reserving static IP address...
	I0812 11:52:00.071153   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Reserved static IP address: 192.168.50.114
	I0812 11:52:00.071183   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Waiting for SSH to be available...
	I0812 11:52:00.071206   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-581883", mac: "52:54:00:76:2f:ab", ip: "192.168.50.114"} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:00.071228   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | skip adding static IP to network mk-default-k8s-diff-port-581883 - found existing host DHCP lease matching {name: "default-k8s-diff-port-581883", mac: "52:54:00:76:2f:ab", ip: "192.168.50.114"}
	I0812 11:52:00.071242   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | Getting to WaitForSSH function...
	I0812 11:52:00.073315   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.073647   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:00.073676   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.073838   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | Using SSH client type: external
	I0812 11:52:00.073868   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | Using SSH private key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa (-rw-------)
	I0812 11:52:00.073909   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0812 11:52:00.073926   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | About to run SSH command:
	I0812 11:52:00.073941   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | exit 0
	I0812 11:52:00.201064   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | SSH cmd err, output: <nil>: 
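
The block above shows libmachine polling for the VM's DHCP lease with growing backoff (the retry.go lines) and then probing SSH with an external ssh client until "exit 0" succeeds. The sketch below collapses both waits into a single retry loop to illustrate the pattern; the SSH options and key path are copied from the log, while the two-minute budget and the backoff/jitter values are illustrative assumptions rather than minikube's actual tuning.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// probeSSH runs `ssh ... docker@<ip> exit 0` with a subset of the
// non-interactive options logged above and reports whether it succeeded.
func probeSSH(ip, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit", "0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	const maxWait = 2 * time.Minute // assumption: not minikube's real timeout
	ip := "192.168.50.114"
	key := "/home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa"

	backoff := 200 * time.Millisecond
	deadline := time.Now().Add(maxWait)
	for time.Now().Before(deadline) {
		if probeSSH(ip, key) {
			fmt.Println("machine is up and SSH is available")
			return
		}
		// Grow the delay and add some jitter, mirroring the
		// "will retry after ..." lines above.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff *= 2
	}
	fmt.Println("timed out waiting for machine")
}
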
	I0812 11:52:00.201417   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetConfigRaw
	I0812 11:52:00.202026   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetIP
	I0812 11:52:00.204566   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.204855   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:00.204895   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.205179   59908 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/config.json ...
	I0812 11:52:00.205369   59908 machine.go:94] provisionDockerMachine start ...
	I0812 11:52:00.205387   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:00.205698   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:00.208214   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.208623   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:00.208656   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.208749   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:00.208932   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:00.209111   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:00.209227   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:00.209359   59908 main.go:141] libmachine: Using SSH client type: native
	I0812 11:52:00.209519   59908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0812 11:52:00.209529   59908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0812 11:52:00.317075   59908 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0812 11:52:00.317106   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetMachineName
	I0812 11:52:00.317394   59908 buildroot.go:166] provisioning hostname "default-k8s-diff-port-581883"
	I0812 11:52:00.317427   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetMachineName
	I0812 11:52:00.317617   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:00.320809   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.321256   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:00.321297   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.321415   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:00.321625   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:00.321793   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:00.321927   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:00.322174   59908 main.go:141] libmachine: Using SSH client type: native
	I0812 11:52:00.322337   59908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0812 11:52:00.322350   59908 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-581883 && echo "default-k8s-diff-port-581883" | sudo tee /etc/hostname
	I0812 11:52:00.448512   59908 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-581883
	
	I0812 11:52:00.448544   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:00.451372   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.451915   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:00.451942   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.452144   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:00.452341   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:00.452510   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:00.452661   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:00.452823   59908 main.go:141] libmachine: Using SSH client type: native
	I0812 11:52:00.453021   59908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0812 11:52:00.453038   59908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-581883' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-581883/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-581883' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 11:52:00.569754   59908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 11:52:00.569791   59908 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19409-3774/.minikube CaCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19409-3774/.minikube}
	I0812 11:52:00.569808   59908 buildroot.go:174] setting up certificates
	I0812 11:52:00.569818   59908 provision.go:84] configureAuth start
	I0812 11:52:00.569829   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetMachineName
	I0812 11:52:00.570114   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetIP
	I0812 11:52:00.572834   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.573325   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:00.573357   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.573549   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:00.576212   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.576670   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:00.576717   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:00.576915   59908 provision.go:143] copyHostCerts
	I0812 11:52:00.576979   59908 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem, removing ...
	I0812 11:52:00.576989   59908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem
	I0812 11:52:00.577051   59908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem (1082 bytes)
	I0812 11:52:00.577148   59908 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem, removing ...
	I0812 11:52:00.577157   59908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem
	I0812 11:52:00.577184   59908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem (1123 bytes)
	I0812 11:52:00.577241   59908 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem, removing ...
	I0812 11:52:00.577248   59908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem
	I0812 11:52:00.577270   59908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem (1679 bytes)
	I0812 11:52:00.577366   59908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-581883 san=[127.0.0.1 192.168.50.114 default-k8s-diff-port-581883 localhost minikube]
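
The provisioner regenerates machines/server.pem with org=jenkins.default-k8s-diff-port-581883 and san=[127.0.0.1 192.168.50.114 default-k8s-diff-port-581883 localhost minikube]. A minimal standard-library sketch for confirming a generated certificate actually carries those SANs; the path is the one from the line above, and this check is not part of minikube itself.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path taken from the log above.
	path := "/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem"

	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read cert:", err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		fmt.Fprintln(os.Stderr, "no CERTIFICATE block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, "parse cert:", err)
		os.Exit(1)
	}
	// The DNS and IP SANs should match the san=[...] list logged above.
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs: ", cert.IPAddresses)
	fmt.Println("Org:     ", cert.Subject.Organization)
}
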
	I0812 11:52:01.053674   59908 provision.go:177] copyRemoteCerts
	I0812 11:52:01.053733   59908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 11:52:01.053756   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:01.056305   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.056840   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:01.056894   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.057105   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:01.057325   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:01.057486   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:01.057641   59908 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa Username:docker}
	I0812 11:52:01.142765   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0812 11:52:01.168430   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0812 11:52:01.193360   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0812 11:52:01.218125   59908 provision.go:87] duration metric: took 648.29686ms to configureAuth
	I0812 11:52:01.218151   59908 buildroot.go:189] setting minikube options for container-runtime
	I0812 11:52:01.218337   59908 config.go:182] Loaded profile config "default-k8s-diff-port-581883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 11:52:01.218432   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:01.221497   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.221858   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:01.221887   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.222077   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:01.222261   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:01.222436   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:01.222596   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:01.222736   59908 main.go:141] libmachine: Using SSH client type: native
	I0812 11:52:01.222963   59908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0812 11:52:01.222986   59908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 11:52:01.490986   59908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 11:52:01.491013   59908 machine.go:97] duration metric: took 1.285630113s to provisionDockerMachine
	I0812 11:52:01.491026   59908 start.go:293] postStartSetup for "default-k8s-diff-port-581883" (driver="kvm2")
	I0812 11:52:01.491038   59908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 11:52:01.491054   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:01.491385   59908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 11:52:01.491414   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:01.494451   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.494830   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:01.494881   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.495025   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:01.495216   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:01.495372   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:01.495522   59908 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa Username:docker}
	I0812 11:52:01.579756   59908 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 11:52:01.583802   59908 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 11:52:01.583828   59908 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/addons for local assets ...
	I0812 11:52:01.583952   59908 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/files for local assets ...
	I0812 11:52:01.584051   59908 filesync.go:149] local asset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> 109272.pem in /etc/ssl/certs
	I0812 11:52:01.584167   59908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 11:52:01.593940   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /etc/ssl/certs/109272.pem (1708 bytes)
	I0812 11:52:01.619301   59908 start.go:296] duration metric: took 128.258855ms for postStartSetup
	I0812 11:52:01.619343   59908 fix.go:56] duration metric: took 20.045251384s for fixHost
	I0812 11:52:01.619365   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:01.622507   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.622917   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:01.622954   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.623116   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:01.623308   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:01.623461   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:01.623623   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:01.623803   59908 main.go:141] libmachine: Using SSH client type: native
	I0812 11:52:01.624015   59908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0812 11:52:01.624031   59908 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0812 11:52:01.733552   59908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723463521.708750952
	
	I0812 11:52:01.733588   59908 fix.go:216] guest clock: 1723463521.708750952
	I0812 11:52:01.733613   59908 fix.go:229] Guest: 2024-08-12 11:52:01.708750952 +0000 UTC Remote: 2024-08-12 11:52:01.619347823 +0000 UTC m=+302.640031526 (delta=89.403129ms)
	I0812 11:52:01.733639   59908 fix.go:200] guest clock delta is within tolerance: 89.403129ms
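
The clock check above runs `date +%s.%N` on the guest and compares the result with the host's wall clock; here the delta was about 89ms and was accepted. A minimal sketch of that comparison, assuming a one-second tolerance purely for illustration.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the "seconds.nanoseconds" string produced by
// `date +%s.%N` on the guest and returns how far it is from the local clock.
func clockDelta(guest string, local time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guest), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Right-pad the fraction to nine digits so ".7" means 700ms, not 7ns.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return 0, err
		}
	}
	return local.Sub(time.Unix(sec, nsec)), nil
}

func main() {
	const tolerance = 1 * time.Second // assumed threshold, for illustration only
	guest := "1723463521.708750952"   // guest clock value from the log above

	delta, err := clockDelta(guest, time.Now())
	if err != nil {
		fmt.Println("parse error:", err)
		return
	}
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	}
}
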
	I0812 11:52:01.733646   59908 start.go:83] releasing machines lock for "default-k8s-diff-port-581883", held for 20.15958144s
	I0812 11:52:01.733673   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:01.733971   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetIP
	I0812 11:52:01.736957   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.737359   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:01.737388   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.737569   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:01.738113   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:01.738315   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:01.738404   59908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 11:52:01.738444   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:01.738710   59908 ssh_runner.go:195] Run: cat /version.json
	I0812 11:52:01.738746   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:01.741424   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.741655   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.741906   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:01.741935   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.742092   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:01.742120   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:01.742120   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:01.742293   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:01.742317   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:01.742487   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:01.742501   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:01.742693   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:01.742709   59908 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa Username:docker}
	I0812 11:52:01.742854   59908 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa Username:docker}
	I0812 11:52:01.821742   59908 ssh_runner.go:195] Run: systemctl --version
	I0812 11:52:01.854649   59908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 11:52:01.994050   59908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 11:52:02.000754   59908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 11:52:02.000848   59908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 11:52:02.017212   59908 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0812 11:52:02.017240   59908 start.go:495] detecting cgroup driver to use...
	I0812 11:52:02.017310   59908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 11:52:02.035650   59908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 11:52:02.050036   59908 docker.go:217] disabling cri-docker service (if available) ...
	I0812 11:52:02.050114   59908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 11:52:02.063916   59908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 11:52:02.078938   59908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 11:52:02.194945   59908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 11:52:02.366538   59908 docker.go:233] disabling docker service ...
	I0812 11:52:02.366616   59908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 11:52:02.380648   59908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 11:52:02.393284   59908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 11:52:02.513560   59908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 11:52:02.638028   59908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 11:52:02.662395   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 11:52:02.683732   59908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0812 11:52:02.683798   59908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:52:02.695379   59908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 11:52:02.695437   59908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:52:02.706905   59908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:52:02.718338   59908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:52:02.729708   59908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 11:52:02.740127   59908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:52:02.750198   59908 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:52:02.766470   59908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:52:02.777845   59908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 11:52:02.788254   59908 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0812 11:52:02.788322   59908 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0812 11:52:02.800552   59908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 11:52:02.809932   59908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:52:02.950568   59908 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0812 11:52:03.087957   59908 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 11:52:03.088031   59908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 11:52:03.094543   59908 start.go:563] Will wait 60s for crictl version
	I0812 11:52:03.094597   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:52:03.098447   59908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 11:52:03.139477   59908 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
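
After restarting CRI-O, the log waits up to 60s for the socket path /var/run/crio/crio.sock and then up to 60s for crictl to answer over it. A sketch of that two-stage readiness check; the one-second poll interval is an assumption.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitFor polls fn until it returns nil or the timeout expires.
func waitFor(timeout, interval time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		time.Sleep(interval)
	}
}

func main() {
	const sock = "/var/run/crio/crio.sock"

	// 1. Wait for the socket path to exist (the log allows 60s).
	if err := waitFor(60*time.Second, time.Second, func() error {
		_, err := os.Stat(sock)
		return err
	}); err != nil {
		fmt.Println("socket never appeared:", err)
		return
	}

	// 2. Wait for crictl to report a runtime version over that socket.
	if err := waitFor(60*time.Second, time.Second, func() error {
		return exec.Command("sudo", "/usr/bin/crictl", "version").Run()
	}); err != nil {
		fmt.Println("crictl never became ready:", err)
		return
	}
	fmt.Println("CRI-O is ready")
}
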
	I0812 11:52:03.139561   59908 ssh_runner.go:195] Run: crio --version
	I0812 11:52:03.169931   59908 ssh_runner.go:195] Run: crio --version
	I0812 11:52:03.202808   59908 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0812 11:52:03.203979   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetIP
	I0812 11:52:03.206641   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:03.207046   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:03.207078   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:03.207300   59908 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0812 11:52:03.211169   59908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 11:52:03.222676   59908 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-581883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-581883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0812 11:52:03.222798   59908 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 11:52:03.222835   59908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 11:52:03.258003   59908 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0812 11:52:03.258074   59908 ssh_runner.go:195] Run: which lz4
	I0812 11:52:03.261945   59908 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0812 11:52:03.266002   59908 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0812 11:52:03.266035   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0812 11:52:04.616538   59908 crio.go:462] duration metric: took 1.354621946s to copy over tarball
	I0812 11:52:04.616600   59908 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0812 11:52:06.801880   59908 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.185257635s)
	I0812 11:52:06.801905   59908 crio.go:469] duration metric: took 2.18534207s to extract the tarball
	I0812 11:52:06.801912   59908 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0812 11:52:06.840167   59908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 11:52:06.887647   59908 crio.go:514] all images are preloaded for cri-o runtime.
	I0812 11:52:06.887669   59908 cache_images.go:84] Images are preloaded, skipping loading
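
The preload decision above hinges on `sudo crictl images --output json`: if the expected control-plane image (registry.k8s.io/kube-apiserver:v1.30.3) is missing, the tarball is copied and extracted, then the check is repeated. A sketch of that check; the JSON field names (images, repoTags) reflect crictl's usual output shape but are assumptions here, not values taken from this log.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the parts of `crictl images --output json` we need.
// Field names are assumptions about crictl's output shape.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	const want = "registry.k8s.io/kube-apiserver:v1.30.3"
	ok, err := hasImage(want)
	switch {
	case err != nil:
		fmt.Println("crictl failed:", err)
	case ok:
		fmt.Println("all images are preloaded for the cri-o runtime")
	default:
		fmt.Println("assuming images are not preloaded; extract the preload tarball")
	}
}
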
	I0812 11:52:06.887677   59908 kubeadm.go:934] updating node { 192.168.50.114 8444 v1.30.3 crio true true} ...
	I0812 11:52:06.887780   59908 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-581883 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-581883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0812 11:52:06.887863   59908 ssh_runner.go:195] Run: crio config
	I0812 11:52:06.944347   59908 cni.go:84] Creating CNI manager for ""
	I0812 11:52:06.944372   59908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:52:06.944385   59908 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0812 11:52:06.944404   59908 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.114 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-581883 NodeName:default-k8s-diff-port-581883 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0812 11:52:06.944582   59908 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.114
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-581883"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
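
The rendered kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. As a quick sanity check, and not something minikube does here, the sketch below decodes each document and prints its apiVersion/kind; it assumes gopkg.in/yaml.v3 is available.

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path taken from the scp step below; adjust as needed.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 1; ; i++ {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // end of the multi-document stream
			}
			fmt.Fprintf(os.Stderr, "document %d failed to parse: %v\n", i, err)
			os.Exit(1)
		}
		fmt.Printf("document %d: %s/%s\n", i, doc.APIVersion, doc.Kind)
	}
}
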
	
	I0812 11:52:06.944660   59908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0812 11:52:06.954792   59908 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 11:52:06.954853   59908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0812 11:52:06.964625   59908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0812 11:52:06.981467   59908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 11:52:06.998649   59908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0812 11:52:07.017062   59908 ssh_runner.go:195] Run: grep 192.168.50.114	control-plane.minikube.internal$ /etc/hosts
	I0812 11:52:07.020710   59908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 11:52:07.033442   59908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:52:07.164673   59908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 11:52:07.183526   59908 certs.go:68] Setting up /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883 for IP: 192.168.50.114
	I0812 11:52:07.183574   59908 certs.go:194] generating shared ca certs ...
	I0812 11:52:07.183598   59908 certs.go:226] acquiring lock for ca certs: {Name:mkab0b83a49d5695bc804bf960b468b7a47f73f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:52:07.183769   59908 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key
	I0812 11:52:07.183813   59908 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key
	I0812 11:52:07.183827   59908 certs.go:256] generating profile certs ...
	I0812 11:52:07.183948   59908 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/client.key
	I0812 11:52:07.184117   59908 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/apiserver.key.ebc625f3
	I0812 11:52:07.184198   59908 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/proxy-client.key
	I0812 11:52:07.184361   59908 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem (1338 bytes)
	W0812 11:52:07.184402   59908 certs.go:480] ignoring /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927_empty.pem, impossibly tiny 0 bytes
	I0812 11:52:07.184416   59908 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem (1679 bytes)
	I0812 11:52:07.184448   59908 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem (1082 bytes)
	I0812 11:52:07.184478   59908 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem (1123 bytes)
	I0812 11:52:07.184509   59908 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem (1679 bytes)
	I0812 11:52:07.184562   59908 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem (1708 bytes)
	I0812 11:52:07.185388   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 11:52:07.217465   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 11:52:07.248781   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 11:52:07.278177   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 11:52:07.313023   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0812 11:52:07.336720   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0812 11:52:07.360266   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 11:52:07.388850   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/default-k8s-diff-port-581883/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0812 11:52:07.413532   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem --> /usr/share/ca-certificates/10927.pem (1338 bytes)
	I0812 11:52:07.438304   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /usr/share/ca-certificates/109272.pem (1708 bytes)
	I0812 11:52:07.462084   59908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 11:52:07.486176   59908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 11:52:07.504165   59908 ssh_runner.go:195] Run: openssl version
	I0812 11:52:07.510273   59908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10927.pem && ln -fs /usr/share/ca-certificates/10927.pem /etc/ssl/certs/10927.pem"
	I0812 11:52:07.520671   59908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10927.pem
	I0812 11:52:07.525096   59908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 10:33 /usr/share/ca-certificates/10927.pem
	I0812 11:52:07.525158   59908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10927.pem
	I0812 11:52:07.531038   59908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10927.pem /etc/ssl/certs/51391683.0"
	I0812 11:52:07.542971   59908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109272.pem && ln -fs /usr/share/ca-certificates/109272.pem /etc/ssl/certs/109272.pem"
	I0812 11:52:07.554939   59908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109272.pem
	I0812 11:52:07.559868   59908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 10:33 /usr/share/ca-certificates/109272.pem
	I0812 11:52:07.559928   59908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109272.pem
	I0812 11:52:07.565655   59908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109272.pem /etc/ssl/certs/3ec20f2e.0"
	I0812 11:52:07.578139   59908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 11:52:07.589333   59908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:52:07.594679   59908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:52:07.594755   59908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:52:07.600616   59908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
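
The ln -fs sequence above installs each CA under /usr/share/ca-certificates and links it into /etc/ssl/certs by its OpenSSL subject hash so TLS clients can look it up. A minimal Go sketch of the same steps, assuming openssl is on PATH and root access to both directories; the cert path in main is illustrative:

// installcert.go: copy a CA cert into /usr/share/ca-certificates and link it
// under /etc/ssl/certs/<subject-hash>.0, mirroring the shell sequence in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCert(src, name string) error {
	dst := filepath.Join("/usr/share/ca-certificates", name)
	data, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	if err := os.WriteFile(dst, data, 0o644); err != nil {
		return err
	}
	// "openssl x509 -hash -noout" prints the subject hash used as the symlink name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", dst).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(dst, link)
}

func main() {
	if err := installCert("/path/to/minikubeCA.pem", "minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
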
	I0812 11:52:07.612028   59908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 11:52:07.617247   59908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0812 11:52:07.623826   59908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0812 11:52:07.630443   59908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0812 11:52:07.637184   59908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0812 11:52:07.643723   59908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0812 11:52:07.650269   59908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
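
The -checkend 86400 calls above only verify that none of the control-plane certificates expire within the next 24 hours. A rough Go equivalent using crypto/x509; the cert path in main is just an example:

// checkend.go: report whether a PEM certificate expires within the next 24 hours,
// roughly what "openssl x509 -checkend 86400" does.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
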
	I0812 11:52:07.657049   59908 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-581883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-581883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:52:07.657136   59908 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0812 11:52:07.657218   59908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 11:52:07.695064   59908 cri.go:89] found id: ""
	I0812 11:52:07.695136   59908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0812 11:52:07.705707   59908 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0812 11:52:07.705725   59908 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0812 11:52:07.705781   59908 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0812 11:52:07.715748   59908 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0812 11:52:07.717230   59908 kubeconfig.go:125] found "default-k8s-diff-port-581883" server: "https://192.168.50.114:8444"
	I0812 11:52:07.720217   59908 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0812 11:52:07.730557   59908 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.114
	I0812 11:52:07.730596   59908 kubeadm.go:1160] stopping kube-system containers ...
	I0812 11:52:07.730609   59908 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0812 11:52:07.730672   59908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 11:52:07.766039   59908 cri.go:89] found id: ""
	I0812 11:52:07.766114   59908 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0812 11:52:07.784359   59908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 11:52:07.794750   59908 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 11:52:07.794781   59908 kubeadm.go:157] found existing configuration files:
	
	I0812 11:52:07.794957   59908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0812 11:52:07.805063   59908 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 11:52:07.805137   59908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 11:52:07.815283   59908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0812 11:52:07.825460   59908 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 11:52:07.825535   59908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 11:52:07.836322   59908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0812 11:52:07.846381   59908 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 11:52:07.846438   59908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 11:52:07.856471   59908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0812 11:52:07.866349   59908 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 11:52:07.866415   59908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
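
The grep-then-rm pattern above drops any kubeconfig that no longer references the expected control-plane endpoint so the kubeadm kubeconfig phase can regenerate it. A sketch of that check in Go, with the endpoint and file list taken from the log; error handling is simplified:

// stalekubeconfig.go: remove a kubeconfig file unless it already points at the
// expected control-plane endpoint (missing files are left for kubeadm to create).
package main

import (
	"fmt"
	"os"
	"strings"
)

func removeIfStale(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil // nothing to clean up; kubeadm will create it
	}
	if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // still points at the right endpoint, keep it
	}
	return os.Remove(path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(f, endpoint); err != nil {
			fmt.Fprintln(os.Stderr, f, err)
		}
	}
}
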
	I0812 11:52:07.876379   59908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 11:52:07.886723   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 11:52:07.993071   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 11:52:08.756027   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0812 11:52:08.978821   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 11:52:09.048377   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
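
The restart path re-runs individual kubeadm init phases rather than a full init. A sketch of driving the same phase sequence from Go, assuming kubeadm is on PATH and run with sufficient privileges; the config path mirrors the log:

// restartphases.go: run the "kubeadm init phase" subcommands shown above, in order,
// against the generated kubeadm.yaml.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "phase failed:", p, err)
			os.Exit(1)
		}
	}
}
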
	I0812 11:52:09.146562   59908 api_server.go:52] waiting for apiserver process to appear ...
	I0812 11:52:09.146658   59908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:52:09.647073   59908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:52:10.147700   59908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:52:10.647212   59908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:52:11.147702   59908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:52:11.174640   59908 api_server.go:72] duration metric: took 2.028079757s to wait for apiserver process to appear ...
	I0812 11:52:11.174665   59908 api_server.go:88] waiting for apiserver healthz status ...
	I0812 11:52:11.174698   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:11.175152   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": dial tcp 192.168.50.114:8444: connect: connection refused
	I0812 11:52:11.674838   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:16.675764   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 11:52:16.675832   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:21.676084   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 11:52:21.676129   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:26.676483   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 11:52:26.676531   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:31.676994   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 11:52:31.677032   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:31.841007   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": read tcp 192.168.50.1:45150->192.168.50.114:8444: read: connection reset by peer
	I0812 11:52:32.175501   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:32.176109   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": dial tcp 192.168.50.114:8444: connect: connection refused
	I0812 11:52:32.675714   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:37.676528   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 11:52:37.676575   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:42.677744   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 11:52:42.677782   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:47.679062   59908 api_server.go:269] stopped: https://192.168.50.114:8444/healthz: Get "https://192.168.50.114:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0812 11:52:47.679139   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:50.075690   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0812 11:52:50.075722   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0812 11:52:50.075736   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:50.231100   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0812 11:52:50.231129   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0812 11:52:50.231143   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:50.273525   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:50.273564   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:50.675005   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:50.681580   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:50.681621   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:51.175129   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:51.188048   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:51.188075   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:51.675218   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:51.684784   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:51.684822   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:52.175465   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:52.179666   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:52.179686   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:52.675234   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:52.680948   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:52.680972   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:53.175533   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:53.180849   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:53.180889   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:53.675084   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:53.680320   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:53.680352   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:54.175057   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:54.180061   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:54.180087   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:54.675117   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:54.679922   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:54.679950   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:55.175569   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:55.179883   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 11:52:55.179908   59908 api_server.go:103] status: https://192.168.50.114:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 11:52:55.675522   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:52:55.680182   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 200:
	ok
	I0812 11:52:55.686477   59908 api_server.go:141] control plane version: v1.30.3
	I0812 11:52:55.686505   59908 api_server.go:131] duration metric: took 44.511833813s to wait for apiserver health ...
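
Note: the repeated 500 responses above are the apiserver's /healthz body while the apiservice-discovery-controller post-start hook is still settling; once /healthz returns 200 the health wait completes. A minimal sketch of this kind of polling loop, assuming the URL and timeout from the log but not reproducing minikube's actual api_server.go implementation:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // pollHealthz polls an apiserver /healthz endpoint until it returns 200 or the
    // deadline passes. TLS verification is skipped because the apiserver serves a
    // cluster-internal certificate.
    func pollHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			// Mirrors the "returned 500:" lines above: print the healthz body.
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
    	if err := pollHealthz("https://192.168.50.114:8444/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
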
	I0812 11:52:55.686513   59908 cni.go:84] Creating CNI manager for ""
	I0812 11:52:55.686519   59908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:52:55.688415   59908 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0812 11:52:55.689745   59908 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0812 11:52:55.700910   59908 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
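
Note: the 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration mentioned two lines above. Its exact contents are not shown in the log; the sketch below writes a bridge conflist of the same general shape, with the cniVersion, bridge name, and subnet chosen as illustrative assumptions rather than minikube's real values:

    package main

    import (
    	"fmt"
    	"os"
    )

    // A minimal bridge CNI conflist of the general shape minikube writes; the
    // version, names and subnet here are assumptions, not the exact file from the log.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }`

    func main() {
    	// Written to a temp path here; the real target /etc/cni/net.d requires root.
    	if err := os.WriteFile("/tmp/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
    		fmt.Println(err)
    	}
    }
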
	I0812 11:52:55.719588   59908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 11:52:55.729581   59908 system_pods.go:59] 8 kube-system pods found
	I0812 11:52:55.729622   59908 system_pods.go:61] "coredns-7db6d8ff4d-86flr" [703201f6-ba92-45f7-b273-ee508cf51e2b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0812 11:52:55.729630   59908 system_pods.go:61] "etcd-default-k8s-diff-port-581883" [98074b68-6274-4496-8fd3-7bad8b59b063] Running
	I0812 11:52:55.729640   59908 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-581883" [3f9d02cd-8b6f-4640-98e2-ebc5145444ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0812 11:52:55.729651   59908 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-581883" [b6c17f8f-18eb-41e6-9ef6-bab882066d51] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0812 11:52:55.729662   59908 system_pods.go:61] "kube-proxy-h6fzz" [b0f6bcc8-263a-4b23-a60b-c67475a868bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0812 11:52:55.729673   59908 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-581883" [3b8e21a4-9578-40fc-be22-8a469b5e9ff2] Running
	I0812 11:52:55.729682   59908 system_pods.go:61] "metrics-server-569cc877fc-wcpgl" [11f6c813-ebc1-4712-b758-cb08ff921d77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 11:52:55.729693   59908 system_pods.go:61] "storage-provisioner" [93affc3b-a4e7-4c19-824c-3eec33616acc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0812 11:52:55.729702   59908 system_pods.go:74] duration metric: took 10.095218ms to wait for pod list to return data ...
	I0812 11:52:55.729712   59908 node_conditions.go:102] verifying NodePressure condition ...
	I0812 11:52:55.733812   59908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 11:52:55.733841   59908 node_conditions.go:123] node cpu capacity is 2
	I0812 11:52:55.733857   59908 node_conditions.go:105] duration metric: took 4.136436ms to run NodePressure ...
	I0812 11:52:55.733877   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 11:52:56.014193   59908 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0812 11:52:56.026600   59908 kubeadm.go:739] kubelet initialised
	I0812 11:52:56.026629   59908 kubeadm.go:740] duration metric: took 12.405458ms waiting for restarted kubelet to initialise ...
	I0812 11:52:56.026637   59908 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:52:56.031669   59908 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-86flr" in "kube-system" namespace to be "Ready" ...
	I0812 11:52:56.042499   59908 pod_ready.go:97] node "default-k8s-diff-port-581883" hosting pod "coredns-7db6d8ff4d-86flr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.042526   59908 pod_ready.go:81] duration metric: took 10.82967ms for pod "coredns-7db6d8ff4d-86flr" in "kube-system" namespace to be "Ready" ...
	E0812 11:52:56.042537   59908 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-581883" hosting pod "coredns-7db6d8ff4d-86flr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.042547   59908 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:52:56.048265   59908 pod_ready.go:97] node "default-k8s-diff-port-581883" hosting pod "etcd-default-k8s-diff-port-581883" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.048290   59908 pod_ready.go:81] duration metric: took 5.732651ms for pod "etcd-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	E0812 11:52:56.048307   59908 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-581883" hosting pod "etcd-default-k8s-diff-port-581883" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.048315   59908 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:52:56.054613   59908 pod_ready.go:97] node "default-k8s-diff-port-581883" hosting pod "kube-apiserver-default-k8s-diff-port-581883" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.054639   59908 pod_ready.go:81] duration metric: took 6.314697ms for pod "kube-apiserver-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	E0812 11:52:56.054652   59908 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-581883" hosting pod "kube-apiserver-default-k8s-diff-port-581883" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.054660   59908 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:52:56.125380   59908 pod_ready.go:97] node "default-k8s-diff-port-581883" hosting pod "kube-controller-manager-default-k8s-diff-port-581883" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.125418   59908 pod_ready.go:81] duration metric: took 70.74807ms for pod "kube-controller-manager-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	E0812 11:52:56.125433   59908 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-581883" hosting pod "kube-controller-manager-default-k8s-diff-port-581883" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.125441   59908 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-h6fzz" in "kube-system" namespace to be "Ready" ...
	I0812 11:52:56.523216   59908 pod_ready.go:97] node "default-k8s-diff-port-581883" hosting pod "kube-proxy-h6fzz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.523251   59908 pod_ready.go:81] duration metric: took 397.801141ms for pod "kube-proxy-h6fzz" in "kube-system" namespace to be "Ready" ...
	E0812 11:52:56.523263   59908 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-581883" hosting pod "kube-proxy-h6fzz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.523272   59908 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:52:56.923229   59908 pod_ready.go:97] node "default-k8s-diff-port-581883" hosting pod "kube-scheduler-default-k8s-diff-port-581883" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.923269   59908 pod_ready.go:81] duration metric: took 399.981518ms for pod "kube-scheduler-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	E0812 11:52:56.923285   59908 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-581883" hosting pod "kube-scheduler-default-k8s-diff-port-581883" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:56.923295   59908 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace to be "Ready" ...
	I0812 11:52:57.323846   59908 pod_ready.go:97] node "default-k8s-diff-port-581883" hosting pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:57.323877   59908 pod_ready.go:81] duration metric: took 400.572011ms for pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace to be "Ready" ...
	E0812 11:52:57.323888   59908 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-581883" hosting pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:52:57.323896   59908 pod_ready.go:38] duration metric: took 1.297248784s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:52:57.323911   59908 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 11:52:57.336325   59908 ops.go:34] apiserver oom_adj: -16
	I0812 11:52:57.336345   59908 kubeadm.go:597] duration metric: took 49.630615077s to restartPrimaryControlPlane
	I0812 11:52:57.336365   59908 kubeadm.go:394] duration metric: took 49.67932273s to StartCluster
	I0812 11:52:57.336380   59908 settings.go:142] acquiring lock: {Name:mk4060151bda1f131d6abfbbd19b6a8ab3b5e774 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:52:57.336447   59908 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 11:52:57.338064   59908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/kubeconfig: {Name:mke68f1d372d8e5baa69199b426efaec54860499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:52:57.338331   59908 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.114 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 11:52:57.338433   59908 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0812 11:52:57.338521   59908 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-581883"
	I0812 11:52:57.338536   59908 config.go:182] Loaded profile config "default-k8s-diff-port-581883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 11:52:57.338551   59908 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-581883"
	I0812 11:52:57.338587   59908 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-581883"
	I0812 11:52:57.338558   59908 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-581883"
	W0812 11:52:57.338662   59908 addons.go:243] addon storage-provisioner should already be in state true
	I0812 11:52:57.338695   59908 host.go:66] Checking if "default-k8s-diff-port-581883" exists ...
	I0812 11:52:57.338563   59908 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-581883"
	I0812 11:52:57.338755   59908 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-581883"
	W0812 11:52:57.338764   59908 addons.go:243] addon metrics-server should already be in state true
	I0812 11:52:57.338788   59908 host.go:66] Checking if "default-k8s-diff-port-581883" exists ...
	I0812 11:52:57.339032   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:52:57.339033   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:52:57.339035   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:52:57.339067   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:52:57.339084   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:52:57.339065   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:52:57.340300   59908 out.go:177] * Verifying Kubernetes components...
	I0812 11:52:57.342119   59908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:52:57.356069   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43019
	I0812 11:52:57.356172   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35497
	I0812 11:52:57.356610   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:52:57.356723   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:52:57.357168   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:52:57.357189   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:52:57.357329   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:52:57.357356   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:52:57.357543   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:52:57.357718   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:52:57.358105   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:52:57.358143   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:52:57.358331   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:52:57.358367   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:52:57.360134   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41803
	I0812 11:52:57.360536   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:52:57.361016   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:52:57.361041   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:52:57.361371   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:52:57.361569   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetState
	I0812 11:52:57.365260   59908 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-581883"
	W0812 11:52:57.365279   59908 addons.go:243] addon default-storageclass should already be in state true
	I0812 11:52:57.365312   59908 host.go:66] Checking if "default-k8s-diff-port-581883" exists ...
	I0812 11:52:57.365596   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:52:57.365639   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:52:57.377488   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34035
	I0812 11:52:57.378076   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:52:57.378581   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41469
	I0812 11:52:57.378657   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:52:57.378680   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:52:57.378965   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:52:57.379025   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:52:57.379251   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetState
	I0812 11:52:57.379656   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:52:57.379683   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:52:57.380105   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:52:57.380391   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetState
	I0812 11:52:57.382273   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:57.382496   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:57.383601   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33225
	I0812 11:52:57.384062   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:52:57.384739   59908 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 11:52:57.384750   59908 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0812 11:52:57.384914   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:52:57.384940   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:52:57.385293   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:52:57.385956   59908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:52:57.386002   59908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:52:57.386314   59908 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0812 11:52:57.386336   59908 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0812 11:52:57.386355   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:57.386386   59908 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 11:52:57.386398   59908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 11:52:57.386416   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:57.390135   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:57.390335   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:57.390669   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:57.390729   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:57.391183   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:57.391187   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:57.391251   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:57.391393   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:57.391432   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:57.391571   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:57.391592   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:57.391722   59908 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa Username:docker}
	I0812 11:52:57.391758   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:57.391921   59908 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa Username:docker}
	I0812 11:52:57.431097   59908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39357
	I0812 11:52:57.431600   59908 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:52:57.432116   59908 main.go:141] libmachine: Using API Version  1
	I0812 11:52:57.432140   59908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:52:57.432506   59908 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:52:57.432702   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetState
	I0812 11:52:57.434513   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .DriverName
	I0812 11:52:57.434753   59908 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 11:52:57.434772   59908 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 11:52:57.434791   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHHostname
	I0812 11:52:57.438433   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:57.438917   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:2f:ab", ip: ""} in network mk-default-k8s-diff-port-581883: {Iface:virbr2 ExpiryTime:2024-08-12 12:51:52 +0000 UTC Type:0 Mac:52:54:00:76:2f:ab Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:default-k8s-diff-port-581883 Clientid:01:52:54:00:76:2f:ab}
	I0812 11:52:57.438951   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | domain default-k8s-diff-port-581883 has defined IP address 192.168.50.114 and MAC address 52:54:00:76:2f:ab in network mk-default-k8s-diff-port-581883
	I0812 11:52:57.439150   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHPort
	I0812 11:52:57.439384   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHKeyPath
	I0812 11:52:57.439574   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .GetSSHUsername
	I0812 11:52:57.439744   59908 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/default-k8s-diff-port-581883/id_rsa Username:docker}
	I0812 11:52:57.547325   59908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 11:52:57.566163   59908 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-581883" to be "Ready" ...
	I0812 11:52:57.633469   59908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 11:52:57.641330   59908 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0812 11:52:57.641355   59908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0812 11:52:57.662909   59908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 11:52:57.691294   59908 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0812 11:52:57.691321   59908 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0812 11:52:57.746668   59908 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 11:52:57.746693   59908 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0812 11:52:57.787970   59908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 11:52:58.628106   59908 main.go:141] libmachine: Making call to close driver server
	I0812 11:52:58.628134   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Close
	I0812 11:52:58.628106   59908 main.go:141] libmachine: Making call to close driver server
	I0812 11:52:58.628195   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Close
	I0812 11:52:58.628464   59908 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:52:58.628481   59908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:52:58.628490   59908 main.go:141] libmachine: Making call to close driver server
	I0812 11:52:58.628498   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Close
	I0812 11:52:58.628611   59908 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:52:58.628626   59908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:52:58.628647   59908 main.go:141] libmachine: Making call to close driver server
	I0812 11:52:58.628651   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | Closing plugin on server side
	I0812 11:52:58.628655   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Close
	I0812 11:52:58.628775   59908 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:52:58.628785   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | Closing plugin on server side
	I0812 11:52:58.628791   59908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:52:58.630407   59908 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:52:58.630424   59908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:52:58.634739   59908 main.go:141] libmachine: Making call to close driver server
	I0812 11:52:58.634759   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Close
	I0812 11:52:58.635034   59908 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:52:58.635053   59908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:52:58.643171   59908 main.go:141] libmachine: Making call to close driver server
	I0812 11:52:58.643191   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Close
	I0812 11:52:58.643484   59908 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:52:58.643502   59908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:52:58.643511   59908 main.go:141] libmachine: Making call to close driver server
	I0812 11:52:58.643520   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) Calling .Close
	I0812 11:52:58.643532   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | Closing plugin on server side
	I0812 11:52:58.643732   59908 main.go:141] libmachine: (default-k8s-diff-port-581883) DBG | Closing plugin on server side
	I0812 11:52:58.643754   59908 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:52:58.643762   59908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:52:58.643771   59908 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-581883"
	I0812 11:52:58.645811   59908 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0812 11:52:58.647443   59908 addons.go:510] duration metric: took 1.309010451s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0812 11:52:59.569732   59908 node_ready.go:53] node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:53:01.570136   59908 node_ready.go:53] node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:53:04.069965   59908 node_ready.go:53] node "default-k8s-diff-port-581883" has status "Ready":"False"
	I0812 11:53:05.570009   59908 node_ready.go:49] node "default-k8s-diff-port-581883" has status "Ready":"True"
	I0812 11:53:05.570039   59908 node_ready.go:38] duration metric: took 8.003840242s for node "default-k8s-diff-port-581883" to be "Ready" ...
	I0812 11:53:05.570050   59908 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:53:05.577206   59908 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-86flr" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:07.584071   59908 pod_ready.go:102] pod "coredns-7db6d8ff4d-86flr" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:08.583523   59908 pod_ready.go:92] pod "coredns-7db6d8ff4d-86flr" in "kube-system" namespace has status "Ready":"True"
	I0812 11:53:08.583550   59908 pod_ready.go:81] duration metric: took 3.006317399s for pod "coredns-7db6d8ff4d-86flr" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.583559   59908 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.589137   59908 pod_ready.go:92] pod "etcd-default-k8s-diff-port-581883" in "kube-system" namespace has status "Ready":"True"
	I0812 11:53:08.589163   59908 pod_ready.go:81] duration metric: took 5.595854ms for pod "etcd-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.589175   59908 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.593746   59908 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-581883" in "kube-system" namespace has status "Ready":"True"
	I0812 11:53:08.593767   59908 pod_ready.go:81] duration metric: took 4.585829ms for pod "kube-apiserver-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.593776   59908 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.598058   59908 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-581883" in "kube-system" namespace has status "Ready":"True"
	I0812 11:53:08.598078   59908 pod_ready.go:81] duration metric: took 4.296254ms for pod "kube-controller-manager-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.598087   59908 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h6fzz" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.603106   59908 pod_ready.go:92] pod "kube-proxy-h6fzz" in "kube-system" namespace has status "Ready":"True"
	I0812 11:53:08.603127   59908 pod_ready.go:81] duration metric: took 5.033938ms for pod "kube-proxy-h6fzz" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.603136   59908 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.981404   59908 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-581883" in "kube-system" namespace has status "Ready":"True"
	I0812 11:53:08.981429   59908 pod_ready.go:81] duration metric: took 378.286388ms for pod "kube-scheduler-default-k8s-diff-port-581883" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:08.981439   59908 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace to be "Ready" ...
	I0812 11:53:10.988175   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:13.488230   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:15.987639   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:18.487540   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:20.490803   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:22.987167   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:25.488840   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:27.988661   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:30.487605   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:32.487748   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:34.488109   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:36.987016   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:38.987165   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:40.989187   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:43.487407   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:45.487714   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:47.487961   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:49.988540   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:52.487216   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:54.487433   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:56.487958   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:53:58.489095   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:00.987353   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:02.989138   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:05.488174   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:07.988702   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:10.488396   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:12.988099   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:14.988220   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:16.988395   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:19.491228   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:21.987397   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:23.987898   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:26.487993   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:28.489384   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:30.989371   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:33.488670   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:35.987526   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:37.988823   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:40.488488   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:42.488612   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:44.989023   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:46.990079   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:49.488206   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:51.488446   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:53.988007   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:56.488200   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:54:58.490348   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:00.988756   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:03.487527   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:05.987624   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:07.989990   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:10.487888   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:12.488656   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:14.489648   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:16.988551   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:19.488408   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:21.988902   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:24.487895   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:26.988377   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:29.488082   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:31.986995   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:33.987359   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:35.989125   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:38.489945   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:40.493189   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:42.988399   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:45.487307   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:47.487758   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:49.487798   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:51.987795   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:53.988376   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:55.990060   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:55:58.487684   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:00.487893   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:02.988185   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:04.988436   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:07.487867   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:09.987976   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:11.988078   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:13.988354   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:15.988676   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:18.488658   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:20.987780   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:23.486965   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:25.487065   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:27.487891   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:29.488825   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:31.988732   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:34.487771   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:36.988555   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:39.489154   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:41.987687   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:43.990010   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:45.991210   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:48.487381   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:50.987943   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:53.487657   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:55.987206   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:57.988164   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:56:59.990098   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:57:02.486732   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:57:04.488492   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:57:06.987443   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:57:08.988727   59908 pod_ready.go:102] pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace has status "Ready":"False"
	I0812 11:57:08.988756   59908 pod_ready.go:81] duration metric: took 4m0.007310185s for pod "metrics-server-569cc877fc-wcpgl" in "kube-system" namespace to be "Ready" ...
	E0812 11:57:08.988768   59908 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0812 11:57:08.988777   59908 pod_ready.go:38] duration metric: took 4m3.418715457s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
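
Note: the metrics-server pod never reaches Ready inside the 4m window, presumably because its image is pinned to fake.domain/registry.k8s.io/echoserver:1.4 (see the addon setup above) and cannot be pulled, so the extra wait ends in "context deadline exceeded". The condition being polled is the standard PodReady condition; a hedged client-go sketch of that check follows, with the kubeconfig path a placeholder while the namespace and pod name are taken from the log:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the pod's PodReady condition is True, the same
    // condition the pod_ready.go lines above keep polling.
    func podIsReady(pod *corev1.Pod) bool {
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Placeholder kubeconfig path; the test run uses its own per-profile config.
    	config, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}

    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-569cc877fc-wcpgl", metav1.GetOptions{})
    		if err == nil && podIsReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to be Ready")
    }
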
	I0812 11:57:08.988795   59908 api_server.go:52] waiting for apiserver process to appear ...
	I0812 11:57:08.988823   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:57:08.988909   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:57:09.035203   59908 cri.go:89] found id: "87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98"
	I0812 11:57:09.035230   59908 cri.go:89] found id: "399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1"
	I0812 11:57:09.035236   59908 cri.go:89] found id: ""
	I0812 11:57:09.035244   59908 logs.go:276] 2 containers: [87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98 399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1]
	I0812 11:57:09.035298   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.039940   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.044354   59908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:57:09.044430   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:57:09.079692   59908 cri.go:89] found id: "a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126"
	I0812 11:57:09.079716   59908 cri.go:89] found id: ""
	I0812 11:57:09.079725   59908 logs.go:276] 1 containers: [a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126]
	I0812 11:57:09.079788   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.084499   59908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:57:09.084576   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:57:09.124721   59908 cri.go:89] found id: "72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4"
	I0812 11:57:09.124750   59908 cri.go:89] found id: ""
	I0812 11:57:09.124761   59908 logs.go:276] 1 containers: [72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4]
	I0812 11:57:09.124828   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.128921   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:57:09.128997   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:57:09.164960   59908 cri.go:89] found id: "3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804"
	I0812 11:57:09.164982   59908 cri.go:89] found id: ""
	I0812 11:57:09.164995   59908 logs.go:276] 1 containers: [3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804]
	I0812 11:57:09.165046   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.169043   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:57:09.169116   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:57:09.211298   59908 cri.go:89] found id: "b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26"
	I0812 11:57:09.211322   59908 cri.go:89] found id: ""
	I0812 11:57:09.211329   59908 logs.go:276] 1 containers: [b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26]
	I0812 11:57:09.211377   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.215348   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:57:09.215440   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:57:09.269500   59908 cri.go:89] found id: "b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f"
	I0812 11:57:09.269519   59908 cri.go:89] found id: "f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f"
	I0812 11:57:09.269523   59908 cri.go:89] found id: ""
	I0812 11:57:09.269530   59908 logs.go:276] 2 containers: [b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f]
	I0812 11:57:09.269575   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.273724   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.277660   59908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:57:09.277732   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:57:09.327668   59908 cri.go:89] found id: ""
	I0812 11:57:09.327691   59908 logs.go:276] 0 containers: []
	W0812 11:57:09.327698   59908 logs.go:278] No container was found matching "kindnet"
	I0812 11:57:09.327703   59908 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0812 11:57:09.327765   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0812 11:57:09.363936   59908 cri.go:89] found id: "3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c"
	I0812 11:57:09.363957   59908 cri.go:89] found id: ""
	I0812 11:57:09.363964   59908 logs.go:276] 1 containers: [3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c]
	I0812 11:57:09.364010   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:09.368123   59908 logs.go:123] Gathering logs for kubelet ...
	I0812 11:57:09.368151   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:57:09.441676   59908 logs.go:123] Gathering logs for kube-apiserver [399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1] ...
	I0812 11:57:09.441725   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1"
	I0812 11:57:09.483275   59908 logs.go:123] Gathering logs for kube-controller-manager [b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f] ...
	I0812 11:57:09.483317   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f"
	I0812 11:57:09.544504   59908 logs.go:123] Gathering logs for kube-apiserver [87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98] ...
	I0812 11:57:09.544539   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98"
	I0812 11:57:09.594808   59908 logs.go:123] Gathering logs for kube-scheduler [3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804] ...
	I0812 11:57:09.594839   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804"
	I0812 11:57:09.636141   59908 logs.go:123] Gathering logs for kube-proxy [b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26] ...
	I0812 11:57:09.636178   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26"
	I0812 11:57:09.673996   59908 logs.go:123] Gathering logs for kube-controller-manager [f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f] ...
	I0812 11:57:09.674023   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f"
	I0812 11:57:09.711480   59908 logs.go:123] Gathering logs for storage-provisioner [3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c] ...
	I0812 11:57:09.711504   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c"
	I0812 11:57:09.747830   59908 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:57:09.747861   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:57:10.268559   59908 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:57:10.268607   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 11:57:10.394461   59908 logs.go:123] Gathering logs for etcd [a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126] ...
	I0812 11:57:10.394495   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126"
	I0812 11:57:10.439760   59908 logs.go:123] Gathering logs for coredns [72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4] ...
	I0812 11:57:10.439796   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4"
	I0812 11:57:10.474457   59908 logs.go:123] Gathering logs for container status ...
	I0812 11:57:10.474496   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:57:10.515430   59908 logs.go:123] Gathering logs for dmesg ...
	I0812 11:57:10.515464   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:57:13.029229   59908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:57:13.045764   59908 api_server.go:72] duration metric: took 4m15.707395821s to wait for apiserver process to appear ...
	I0812 11:57:13.045795   59908 api_server.go:88] waiting for apiserver healthz status ...
	I0812 11:57:13.045832   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:57:13.045878   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:57:13.082792   59908 cri.go:89] found id: "87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98"
	I0812 11:57:13.082818   59908 cri.go:89] found id: "399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1"
	I0812 11:57:13.082824   59908 cri.go:89] found id: ""
	I0812 11:57:13.082833   59908 logs.go:276] 2 containers: [87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98 399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1]
	I0812 11:57:13.082893   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.087987   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.092188   59908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:57:13.092251   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:57:13.135193   59908 cri.go:89] found id: "a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126"
	I0812 11:57:13.135226   59908 cri.go:89] found id: ""
	I0812 11:57:13.135237   59908 logs.go:276] 1 containers: [a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126]
	I0812 11:57:13.135293   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.140269   59908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:57:13.140344   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:57:13.193436   59908 cri.go:89] found id: "72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4"
	I0812 11:57:13.193458   59908 cri.go:89] found id: ""
	I0812 11:57:13.193465   59908 logs.go:276] 1 containers: [72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4]
	I0812 11:57:13.193539   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.198507   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:57:13.198589   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:57:13.241696   59908 cri.go:89] found id: "3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804"
	I0812 11:57:13.241718   59908 cri.go:89] found id: ""
	I0812 11:57:13.241725   59908 logs.go:276] 1 containers: [3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804]
	I0812 11:57:13.241773   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.246865   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:57:13.246937   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:57:13.293284   59908 cri.go:89] found id: "b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26"
	I0812 11:57:13.293308   59908 cri.go:89] found id: ""
	I0812 11:57:13.293315   59908 logs.go:276] 1 containers: [b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26]
	I0812 11:57:13.293380   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.297698   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:57:13.297772   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:57:13.342737   59908 cri.go:89] found id: "b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f"
	I0812 11:57:13.342757   59908 cri.go:89] found id: "f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f"
	I0812 11:57:13.342760   59908 cri.go:89] found id: ""
	I0812 11:57:13.342767   59908 logs.go:276] 2 containers: [b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f]
	I0812 11:57:13.342809   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.347634   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.351733   59908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:57:13.351794   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:57:13.394540   59908 cri.go:89] found id: ""
	I0812 11:57:13.394570   59908 logs.go:276] 0 containers: []
	W0812 11:57:13.394580   59908 logs.go:278] No container was found matching "kindnet"
	I0812 11:57:13.394594   59908 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0812 11:57:13.394647   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0812 11:57:13.433910   59908 cri.go:89] found id: "3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c"
	I0812 11:57:13.433934   59908 cri.go:89] found id: ""
	I0812 11:57:13.433944   59908 logs.go:276] 1 containers: [3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c]
	I0812 11:57:13.434001   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:13.437999   59908 logs.go:123] Gathering logs for dmesg ...
	I0812 11:57:13.438024   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:57:13.451945   59908 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:57:13.451973   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 11:57:13.561957   59908 logs.go:123] Gathering logs for coredns [72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4] ...
	I0812 11:57:13.561990   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4"
	I0812 11:57:13.602729   59908 logs.go:123] Gathering logs for kubelet ...
	I0812 11:57:13.602754   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:57:13.673729   59908 logs.go:123] Gathering logs for kube-apiserver [399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1] ...
	I0812 11:57:13.673766   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1"
	I0812 11:57:13.714814   59908 logs.go:123] Gathering logs for kube-proxy [b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26] ...
	I0812 11:57:13.714843   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26"
	I0812 11:57:13.755876   59908 logs.go:123] Gathering logs for kube-controller-manager [b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f] ...
	I0812 11:57:13.755902   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f"
	I0812 11:57:13.814263   59908 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:57:13.814301   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:57:14.305206   59908 logs.go:123] Gathering logs for container status ...
	I0812 11:57:14.305243   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:57:14.349455   59908 logs.go:123] Gathering logs for kube-apiserver [87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98] ...
	I0812 11:57:14.349486   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98"
	I0812 11:57:14.399731   59908 logs.go:123] Gathering logs for etcd [a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126] ...
	I0812 11:57:14.399765   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126"
	I0812 11:57:14.443494   59908 logs.go:123] Gathering logs for kube-scheduler [3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804] ...
	I0812 11:57:14.443524   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804"
	I0812 11:57:14.486034   59908 logs.go:123] Gathering logs for kube-controller-manager [f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f] ...
	I0812 11:57:14.486070   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f"
	I0812 11:57:14.524991   59908 logs.go:123] Gathering logs for storage-provisioner [3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c] ...
	I0812 11:57:14.525018   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c"
	I0812 11:57:17.062314   59908 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8444/healthz ...
	I0812 11:57:17.068363   59908 api_server.go:279] https://192.168.50.114:8444/healthz returned 200:
	ok
	I0812 11:57:17.069818   59908 api_server.go:141] control plane version: v1.30.3
	I0812 11:57:17.069845   59908 api_server.go:131] duration metric: took 4.024042567s to wait for apiserver health ...
	I0812 11:57:17.069856   59908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 11:57:17.069882   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:57:17.069937   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:57:17.107213   59908 cri.go:89] found id: "87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98"
	I0812 11:57:17.107233   59908 cri.go:89] found id: "399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1"
	I0812 11:57:17.107237   59908 cri.go:89] found id: ""
	I0812 11:57:17.107244   59908 logs.go:276] 2 containers: [87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98 399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1]
	I0812 11:57:17.107297   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.117678   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.121897   59908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:57:17.121962   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:57:17.159450   59908 cri.go:89] found id: "a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126"
	I0812 11:57:17.159480   59908 cri.go:89] found id: ""
	I0812 11:57:17.159489   59908 logs.go:276] 1 containers: [a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126]
	I0812 11:57:17.159548   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.164078   59908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:57:17.164156   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:57:17.207977   59908 cri.go:89] found id: "72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4"
	I0812 11:57:17.208002   59908 cri.go:89] found id: ""
	I0812 11:57:17.208010   59908 logs.go:276] 1 containers: [72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4]
	I0812 11:57:17.208063   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.212055   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:57:17.212136   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:57:17.259289   59908 cri.go:89] found id: "3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804"
	I0812 11:57:17.259316   59908 cri.go:89] found id: ""
	I0812 11:57:17.259327   59908 logs.go:276] 1 containers: [3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804]
	I0812 11:57:17.259393   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.263818   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:57:17.263896   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:57:17.301371   59908 cri.go:89] found id: "b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26"
	I0812 11:57:17.301404   59908 cri.go:89] found id: ""
	I0812 11:57:17.301413   59908 logs.go:276] 1 containers: [b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26]
	I0812 11:57:17.301473   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.306038   59908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:57:17.306100   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:57:17.343982   59908 cri.go:89] found id: "b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f"
	I0812 11:57:17.344006   59908 cri.go:89] found id: "f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f"
	I0812 11:57:17.344017   59908 cri.go:89] found id: ""
	I0812 11:57:17.344027   59908 logs.go:276] 2 containers: [b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f]
	I0812 11:57:17.344086   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.348135   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.352720   59908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:57:17.352790   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:57:17.392647   59908 cri.go:89] found id: ""
	I0812 11:57:17.392673   59908 logs.go:276] 0 containers: []
	W0812 11:57:17.392682   59908 logs.go:278] No container was found matching "kindnet"
	I0812 11:57:17.392687   59908 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0812 11:57:17.392740   59908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0812 11:57:17.429067   59908 cri.go:89] found id: "3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c"
	I0812 11:57:17.429088   59908 cri.go:89] found id: ""
	I0812 11:57:17.429095   59908 logs.go:276] 1 containers: [3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c]
	I0812 11:57:17.429140   59908 ssh_runner.go:195] Run: which crictl
	I0812 11:57:17.433406   59908 logs.go:123] Gathering logs for etcd [a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126] ...
	I0812 11:57:17.433433   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126"
	I0812 11:57:17.479091   59908 logs.go:123] Gathering logs for container status ...
	I0812 11:57:17.479123   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:57:17.519579   59908 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:57:17.519614   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 11:57:17.620109   59908 logs.go:123] Gathering logs for kube-apiserver [399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1] ...
	I0812 11:57:17.620143   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1"
	I0812 11:57:17.659604   59908 logs.go:123] Gathering logs for kube-controller-manager [b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f] ...
	I0812 11:57:17.659639   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f"
	I0812 11:57:17.712850   59908 logs.go:123] Gathering logs for kube-controller-manager [f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f] ...
	I0812 11:57:17.712901   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f"
	I0812 11:57:17.750567   59908 logs.go:123] Gathering logs for kubelet ...
	I0812 11:57:17.750595   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 11:57:17.822429   59908 logs.go:123] Gathering logs for coredns [72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4] ...
	I0812 11:57:17.822459   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4"
	I0812 11:57:17.864303   59908 logs.go:123] Gathering logs for kube-scheduler [3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804] ...
	I0812 11:57:17.864338   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804"
	I0812 11:57:17.904307   59908 logs.go:123] Gathering logs for kube-proxy [b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26] ...
	I0812 11:57:17.904340   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26"
	I0812 11:57:17.939073   59908 logs.go:123] Gathering logs for storage-provisioner [3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c] ...
	I0812 11:57:17.939103   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c"
	I0812 11:57:17.982222   59908 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:57:17.982253   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:57:18.369007   59908 logs.go:123] Gathering logs for dmesg ...
	I0812 11:57:18.369053   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:57:18.385187   59908 logs.go:123] Gathering logs for kube-apiserver [87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98] ...
	I0812 11:57:18.385219   59908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98"
	I0812 11:57:20.949075   59908 system_pods.go:59] 8 kube-system pods found
	I0812 11:57:20.949110   59908 system_pods.go:61] "coredns-7db6d8ff4d-86flr" [703201f6-ba92-45f7-b273-ee508cf51e2b] Running
	I0812 11:57:20.949115   59908 system_pods.go:61] "etcd-default-k8s-diff-port-581883" [98074b68-6274-4496-8fd3-7bad8b59b063] Running
	I0812 11:57:20.949119   59908 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-581883" [3f9d02cd-8b6f-4640-98e2-ebc5145444ea] Running
	I0812 11:57:20.949122   59908 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-581883" [b6c17f8f-18eb-41e6-9ef6-bab882066d51] Running
	I0812 11:57:20.949125   59908 system_pods.go:61] "kube-proxy-h6fzz" [b0f6bcc8-263a-4b23-a60b-c67475a868bf] Running
	I0812 11:57:20.949128   59908 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-581883" [3b8e21a4-9578-40fc-be22-8a469b5e9ff2] Running
	I0812 11:57:20.949133   59908 system_pods.go:61] "metrics-server-569cc877fc-wcpgl" [11f6c813-ebc1-4712-b758-cb08ff921d77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 11:57:20.949139   59908 system_pods.go:61] "storage-provisioner" [93affc3b-a4e7-4c19-824c-3eec33616acc] Running
	I0812 11:57:20.949146   59908 system_pods.go:74] duration metric: took 3.879283024s to wait for pod list to return data ...
	I0812 11:57:20.949153   59908 default_sa.go:34] waiting for default service account to be created ...
	I0812 11:57:20.951355   59908 default_sa.go:45] found service account: "default"
	I0812 11:57:20.951376   59908 default_sa.go:55] duration metric: took 2.217928ms for default service account to be created ...
	I0812 11:57:20.951383   59908 system_pods.go:116] waiting for k8s-apps to be running ...
	I0812 11:57:20.956479   59908 system_pods.go:86] 8 kube-system pods found
	I0812 11:57:20.956505   59908 system_pods.go:89] "coredns-7db6d8ff4d-86flr" [703201f6-ba92-45f7-b273-ee508cf51e2b] Running
	I0812 11:57:20.956513   59908 system_pods.go:89] "etcd-default-k8s-diff-port-581883" [98074b68-6274-4496-8fd3-7bad8b59b063] Running
	I0812 11:57:20.956519   59908 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-581883" [3f9d02cd-8b6f-4640-98e2-ebc5145444ea] Running
	I0812 11:57:20.956527   59908 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-581883" [b6c17f8f-18eb-41e6-9ef6-bab882066d51] Running
	I0812 11:57:20.956532   59908 system_pods.go:89] "kube-proxy-h6fzz" [b0f6bcc8-263a-4b23-a60b-c67475a868bf] Running
	I0812 11:57:20.956537   59908 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-581883" [3b8e21a4-9578-40fc-be22-8a469b5e9ff2] Running
	I0812 11:57:20.956546   59908 system_pods.go:89] "metrics-server-569cc877fc-wcpgl" [11f6c813-ebc1-4712-b758-cb08ff921d77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 11:57:20.956553   59908 system_pods.go:89] "storage-provisioner" [93affc3b-a4e7-4c19-824c-3eec33616acc] Running
	I0812 11:57:20.956564   59908 system_pods.go:126] duration metric: took 5.175002ms to wait for k8s-apps to be running ...
	I0812 11:57:20.956572   59908 system_svc.go:44] waiting for kubelet service to be running ....
	I0812 11:57:20.956624   59908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:57:20.971826   59908 system_svc.go:56] duration metric: took 15.246626ms WaitForService to wait for kubelet
	I0812 11:57:20.971856   59908 kubeadm.go:582] duration metric: took 4m23.633490244s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 11:57:20.971881   59908 node_conditions.go:102] verifying NodePressure condition ...
	I0812 11:57:20.974643   59908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 11:57:20.974661   59908 node_conditions.go:123] node cpu capacity is 2
	I0812 11:57:20.974671   59908 node_conditions.go:105] duration metric: took 2.785ms to run NodePressure ...
	I0812 11:57:20.974681   59908 start.go:241] waiting for startup goroutines ...
	I0812 11:57:20.974688   59908 start.go:246] waiting for cluster config update ...
	I0812 11:57:20.974700   59908 start.go:255] writing updated cluster config ...
	I0812 11:57:20.975043   59908 ssh_runner.go:195] Run: rm -f paused
	I0812 11:57:21.025000   59908 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0812 11:57:21.028153   59908 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-581883" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 12 12:02:24 old-k8s-version-835962 crio[649]: time="2024-08-12 12:02:24.450314124Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464144450289118,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=925fb8c8-2424-4a15-be38-7b61ec9b8663 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:02:24 old-k8s-version-835962 crio[649]: time="2024-08-12 12:02:24.450899848Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=46c10d85-6edc-4c84-b4ee-4c44ebd6ae3b name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:02:24 old-k8s-version-835962 crio[649]: time="2024-08-12 12:02:24.450955087Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=46c10d85-6edc-4c84-b4ee-4c44ebd6ae3b name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:02:24 old-k8s-version-835962 crio[649]: time="2024-08-12 12:02:24.450996421Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=46c10d85-6edc-4c84-b4ee-4c44ebd6ae3b name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:02:24 old-k8s-version-835962 crio[649]: time="2024-08-12 12:02:24.483503098Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=12edf056-aa05-4db8-bca1-7e02ac2390ca name=/runtime.v1.RuntimeService/Version
	Aug 12 12:02:24 old-k8s-version-835962 crio[649]: time="2024-08-12 12:02:24.483595811Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=12edf056-aa05-4db8-bca1-7e02ac2390ca name=/runtime.v1.RuntimeService/Version
	Aug 12 12:02:24 old-k8s-version-835962 crio[649]: time="2024-08-12 12:02:24.484671500Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=08969383-3551-46c9-ac38-5ed69579c465 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:02:24 old-k8s-version-835962 crio[649]: time="2024-08-12 12:02:24.485084705Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464144485063346,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=08969383-3551-46c9-ac38-5ed69579c465 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:02:24 old-k8s-version-835962 crio[649]: time="2024-08-12 12:02:24.485875941Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d3272d51-abfa-4bbc-8c7d-8042cfa75a4f name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:02:24 old-k8s-version-835962 crio[649]: time="2024-08-12 12:02:24.485927999Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d3272d51-abfa-4bbc-8c7d-8042cfa75a4f name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:02:24 old-k8s-version-835962 crio[649]: time="2024-08-12 12:02:24.485977609Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d3272d51-abfa-4bbc-8c7d-8042cfa75a4f name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:02:24 old-k8s-version-835962 crio[649]: time="2024-08-12 12:02:24.521166052Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6bb15057-0608-4b2f-8ce9-67357ae01195 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:02:24 old-k8s-version-835962 crio[649]: time="2024-08-12 12:02:24.521246810Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6bb15057-0608-4b2f-8ce9-67357ae01195 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:02:24 old-k8s-version-835962 crio[649]: time="2024-08-12 12:02:24.522694671Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2b828ba5-4da9-4a21-a431-8c22fdf1f9e9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:02:24 old-k8s-version-835962 crio[649]: time="2024-08-12 12:02:24.523122047Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464144523079827,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b828ba5-4da9-4a21-a431-8c22fdf1f9e9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:02:24 old-k8s-version-835962 crio[649]: time="2024-08-12 12:02:24.523717490Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f2b36a2b-a813-4021-a39f-59da453efbd8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:02:24 old-k8s-version-835962 crio[649]: time="2024-08-12 12:02:24.523789657Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f2b36a2b-a813-4021-a39f-59da453efbd8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:02:24 old-k8s-version-835962 crio[649]: time="2024-08-12 12:02:24.523823494Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f2b36a2b-a813-4021-a39f-59da453efbd8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:02:24 old-k8s-version-835962 crio[649]: time="2024-08-12 12:02:24.558386644Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c4386639-5002-4f28-b3dd-87377d88694a name=/runtime.v1.RuntimeService/Version
	Aug 12 12:02:24 old-k8s-version-835962 crio[649]: time="2024-08-12 12:02:24.558535590Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c4386639-5002-4f28-b3dd-87377d88694a name=/runtime.v1.RuntimeService/Version
	Aug 12 12:02:24 old-k8s-version-835962 crio[649]: time="2024-08-12 12:02:24.559888366Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=522fcb4a-979f-490b-a219-d0f0a2ae17ce name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:02:24 old-k8s-version-835962 crio[649]: time="2024-08-12 12:02:24.560288919Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464144560267968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=522fcb4a-979f-490b-a219-d0f0a2ae17ce name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:02:24 old-k8s-version-835962 crio[649]: time="2024-08-12 12:02:24.560781370Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf3dc21b-bd7a-448e-84dc-90cbdd8131a5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:02:24 old-k8s-version-835962 crio[649]: time="2024-08-12 12:02:24.560857349Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf3dc21b-bd7a-448e-84dc-90cbdd8131a5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:02:24 old-k8s-version-835962 crio[649]: time="2024-08-12 12:02:24.560897050Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cf3dc21b-bd7a-448e-84dc-90cbdd8131a5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug12 11:43] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051227] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037827] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.743835] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.017925] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.558019] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.216104] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.055590] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052853] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.197707] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.118940] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.224588] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +6.260019] systemd-fstab-generator[897]: Ignoring "noauto" option for root device
	[  +0.065050] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.865114] systemd-fstab-generator[1022]: Ignoring "noauto" option for root device
	[ +14.292569] kauditd_printk_skb: 46 callbacks suppressed
	[Aug12 11:47] systemd-fstab-generator[5053]: Ignoring "noauto" option for root device
	[Aug12 11:49] systemd-fstab-generator[5340]: Ignoring "noauto" option for root device
	[  +0.063898] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:02:24 up 19 min,  0 users,  load average: 0.05, 0.03, 0.01
	Linux old-k8s-version-835962 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 12 12:02:23 old-k8s-version-835962 kubelet[6787]:         /usr/local/go/src/net/dial.go:580 +0x5e5
	Aug 12 12:02:23 old-k8s-version-835962 kubelet[6787]: net.(*sysDialer).dialSerial(0xc0007f0c80, 0x4f7fe40, 0xc000c3b6e0, 0xc0007da5c0, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Aug 12 12:02:23 old-k8s-version-835962 kubelet[6787]:         /usr/local/go/src/net/dial.go:548 +0x152
	Aug 12 12:02:23 old-k8s-version-835962 kubelet[6787]: net.(*Dialer).DialContext(0xc0008d6480, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000c1aab0, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 12 12:02:23 old-k8s-version-835962 kubelet[6787]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Aug 12 12:02:23 old-k8s-version-835962 kubelet[6787]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc0008d9520, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000c1aab0, 0x24, 0x60, 0x7fba3365fd80, 0x118, ...)
	Aug 12 12:02:23 old-k8s-version-835962 kubelet[6787]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Aug 12 12:02:23 old-k8s-version-835962 kubelet[6787]: net/http.(*Transport).dial(0xc00083c780, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000c1aab0, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 12 12:02:23 old-k8s-version-835962 kubelet[6787]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Aug 12 12:02:23 old-k8s-version-835962 kubelet[6787]: net/http.(*Transport).dialConn(0xc00083c780, 0x4f7fe00, 0xc000120018, 0x0, 0xc0009ab7a0, 0x5, 0xc000c1aab0, 0x24, 0x0, 0xc0009ad440, ...)
	Aug 12 12:02:23 old-k8s-version-835962 kubelet[6787]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Aug 12 12:02:23 old-k8s-version-835962 kubelet[6787]: net/http.(*Transport).dialConnFor(0xc00083c780, 0xc000a11d90)
	Aug 12 12:02:23 old-k8s-version-835962 kubelet[6787]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Aug 12 12:02:23 old-k8s-version-835962 kubelet[6787]: created by net/http.(*Transport).queueForDial
	Aug 12 12:02:23 old-k8s-version-835962 kubelet[6787]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Aug 12 12:02:23 old-k8s-version-835962 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 12 12:02:23 old-k8s-version-835962 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 12 12:02:24 old-k8s-version-835962 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 135.
	Aug 12 12:02:24 old-k8s-version-835962 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 12 12:02:24 old-k8s-version-835962 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 12 12:02:24 old-k8s-version-835962 kubelet[6824]: I0812 12:02:24.412506    6824 server.go:416] Version: v1.20.0
	Aug 12 12:02:24 old-k8s-version-835962 kubelet[6824]: I0812 12:02:24.412758    6824 server.go:837] Client rotation is on, will bootstrap in background
	Aug 12 12:02:24 old-k8s-version-835962 kubelet[6824]: I0812 12:02:24.414740    6824 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 12 12:02:24 old-k8s-version-835962 kubelet[6824]: I0812 12:02:24.416921    6824 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 12 12:02:24 old-k8s-version-835962 kubelet[6824]: W0812 12:02:24.417233    6824 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-835962 -n old-k8s-version-835962
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-835962 -n old-k8s-version-835962: exit status 2 (226.695321ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-835962" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (118.63s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (167.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0812 12:06:24.814886   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/client.crt: no such file or directory
E0812 12:06:24.820255   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/client.crt: no such file or directory
E0812 12:06:24.830673   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/client.crt: no such file or directory
E0812 12:06:24.851036   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/client.crt: no such file or directory
E0812 12:06:24.891957   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/client.crt: no such file or directory
E0812 12:06:24.972372   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/client.crt: no such file or directory
E0812 12:06:25.132839   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/client.crt: no such file or directory
E0812 12:06:25.453088   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/client.crt: no such file or directory
E0812 12:06:26.093908   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/client.crt: no such file or directory
E0812 12:06:27.374093   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-581883 -n default-k8s-diff-port-581883
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-12 12:09:09.334909242 +0000 UTC m=+6538.491831825
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-581883 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-581883 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.576µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-581883 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-581883 -n default-k8s-diff-port-581883
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-581883 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-581883 logs -n 25: (1.296817041s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-824402                         | enable-default-cni-824402 | jenkins | v1.33.1 | 12 Aug 24 12:08 UTC | 12 Aug 24 12:08 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-824402                         | enable-default-cni-824402 | jenkins | v1.33.1 | 12 Aug 24 12:08 UTC | 12 Aug 24 12:08 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-824402 sudo                        | custom-flannel-824402     | jenkins | v1.33.1 | 12 Aug 24 12:08 UTC | 12 Aug 24 12:08 UTC |
	|         | find /etc/crio -type f -exec                         |                           |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-824402 sudo                        | custom-flannel-824402     | jenkins | v1.33.1 | 12 Aug 24 12:08 UTC | 12 Aug 24 12:08 UTC |
	|         | crio config                                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-824402                         | enable-default-cni-824402 | jenkins | v1.33.1 | 12 Aug 24 12:08 UTC |                     |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| delete  | -p custom-flannel-824402                             | custom-flannel-824402     | jenkins | v1.33.1 | 12 Aug 24 12:08 UTC | 12 Aug 24 12:08 UTC |
	| ssh     | -p enable-default-cni-824402                         | enable-default-cni-824402 | jenkins | v1.33.1 | 12 Aug 24 12:08 UTC | 12 Aug 24 12:08 UTC |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-824402                         | enable-default-cni-824402 | jenkins | v1.33.1 | 12 Aug 24 12:08 UTC | 12 Aug 24 12:08 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| start   | -p bridge-824402 --memory=3072                       | bridge-824402             | jenkins | v1.33.1 | 12 Aug 24 12:08 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --cni=bridge --driver=kvm2                           |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-824402                         | enable-default-cni-824402 | jenkins | v1.33.1 | 12 Aug 24 12:08 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-824402                         | enable-default-cni-824402 | jenkins | v1.33.1 | 12 Aug 24 12:08 UTC | 12 Aug 24 12:08 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-824402 sudo cat                | enable-default-cni-824402 | jenkins | v1.33.1 | 12 Aug 24 12:08 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-824402 sudo cat                | enable-default-cni-824402 | jenkins | v1.33.1 | 12 Aug 24 12:08 UTC | 12 Aug 24 12:08 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-824402                         | enable-default-cni-824402 | jenkins | v1.33.1 | 12 Aug 24 12:08 UTC | 12 Aug 24 12:08 UTC |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-824402                         | enable-default-cni-824402 | jenkins | v1.33.1 | 12 Aug 24 12:08 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-824402                         | enable-default-cni-824402 | jenkins | v1.33.1 | 12 Aug 24 12:08 UTC | 12 Aug 24 12:08 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-824402 sudo cat                | enable-default-cni-824402 | jenkins | v1.33.1 | 12 Aug 24 12:08 UTC | 12 Aug 24 12:08 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-824402                         | enable-default-cni-824402 | jenkins | v1.33.1 | 12 Aug 24 12:08 UTC | 12 Aug 24 12:08 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-824402                         | enable-default-cni-824402 | jenkins | v1.33.1 | 12 Aug 24 12:08 UTC | 12 Aug 24 12:08 UTC |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-824402                         | enable-default-cni-824402 | jenkins | v1.33.1 | 12 Aug 24 12:08 UTC | 12 Aug 24 12:08 UTC |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-824402                         | enable-default-cni-824402 | jenkins | v1.33.1 | 12 Aug 24 12:08 UTC | 12 Aug 24 12:08 UTC |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-824402                         | enable-default-cni-824402 | jenkins | v1.33.1 | 12 Aug 24 12:08 UTC | 12 Aug 24 12:08 UTC |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-824402                         | enable-default-cni-824402 | jenkins | v1.33.1 | 12 Aug 24 12:08 UTC | 12 Aug 24 12:08 UTC |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-824402                         | enable-default-cni-824402 | jenkins | v1.33.1 | 12 Aug 24 12:08 UTC | 12 Aug 24 12:08 UTC |
	| ssh     | -p flannel-824402 pgrep -a                           | flannel-824402            | jenkins | v1.33.1 | 12 Aug 24 12:09 UTC | 12 Aug 24 12:09 UTC |
	|         | kubelet                                              |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 12:08:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
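The header above documents the klog-style line format ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg) used for the rest of this trace. For anyone post-processing the log, a minimal parsing sketch in Go (the regular expression and field names are an assumption for illustration, not part of minikube):

    package main

    import (
        "fmt"
        "regexp"
    )

    // Matches the documented format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    var logLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

    func main() {
        sample := "I0812 12:08:30.914924   74546 out.go:291] Setting OutFile to fd 1 ..."
        m := logLine.FindStringSubmatch(sample)
        if m == nil {
            fmt.Println("line does not match the documented format")
            return
        }
        fmt.Printf("severity=%s date=%s time=%s tid=%s file=%s line=%s msg=%q\n",
            m[1], m[2], m[3], m[4], m[5], m[6], m[7])
    }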
	I0812 12:08:30.914924   74546 out.go:291] Setting OutFile to fd 1 ...
	I0812 12:08:30.915114   74546 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:08:30.915127   74546 out.go:304] Setting ErrFile to fd 2...
	I0812 12:08:30.915134   74546 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:08:30.915453   74546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 12:08:30.916312   74546 out.go:298] Setting JSON to false
	I0812 12:08:30.918565   74546 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6652,"bootTime":1723457859,"procs":360,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 12:08:30.918656   74546 start.go:139] virtualization: kvm guest
	I0812 12:08:30.921425   74546 out.go:177] * [bridge-824402] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 12:08:30.923292   74546 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 12:08:30.923311   74546 notify.go:220] Checking for updates...
	I0812 12:08:30.926544   74546 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 12:08:30.928756   74546 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 12:08:30.930526   74546 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 12:08:30.931948   74546 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 12:08:30.933486   74546 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 12:08:30.935629   74546 config.go:182] Loaded profile config "default-k8s-diff-port-581883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:08:30.935771   74546 config.go:182] Loaded profile config "enable-default-cni-824402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:08:30.935879   74546 config.go:182] Loaded profile config "flannel-824402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:08:30.935996   74546 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 12:08:30.983182   74546 out.go:177] * Using the kvm2 driver based on user configuration
	I0812 12:08:30.984478   74546 start.go:297] selected driver: kvm2
	I0812 12:08:30.984493   74546 start.go:901] validating driver "kvm2" against <nil>
	I0812 12:08:30.984505   74546 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 12:08:30.985307   74546 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 12:08:30.985385   74546 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19409-3774/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 12:08:31.002802   74546 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 12:08:31.002863   74546 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 12:08:31.003102   74546 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 12:08:31.003167   74546 cni.go:84] Creating CNI manager for "bridge"
	I0812 12:08:31.003182   74546 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0812 12:08:31.003258   74546 start.go:340] cluster config:
	{Name:bridge-824402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-824402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 12:08:31.003385   74546 iso.go:125] acquiring lock: {Name:mk817f4bb6a5031e68978029203802957205757f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 12:08:31.005203   74546 out.go:177] * Starting "bridge-824402" primary control-plane node in "bridge-824402" cluster
	I0812 12:08:31.006659   74546 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 12:08:31.006709   74546 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0812 12:08:31.006722   74546 cache.go:56] Caching tarball of preloaded images
	I0812 12:08:31.006829   74546 preload.go:172] Found /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 12:08:31.006843   74546 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 12:08:31.006975   74546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/bridge-824402/config.json ...
	I0812 12:08:31.007003   74546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/bridge-824402/config.json: {Name:mk773bb0a252d99bcc4cef20d9563b444a4c6a2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:08:31.007246   74546 start.go:360] acquireMachinesLock for bridge-824402: {Name:mkf1f3ff7da562a087a1e344d13e67c3c8140973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 12:08:31.007289   74546 start.go:364] duration metric: took 23.124µs to acquireMachinesLock for "bridge-824402"
	I0812 12:08:31.007311   74546 start.go:93] Provisioning new machine with config: &{Name:bridge-824402 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-824402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 12:08:31.007402   74546 start.go:125] createHost starting for "" (driver="kvm2")
	I0812 12:08:30.856255   71637 addons.go:510] duration metric: took 1.162890312s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0812 12:08:30.870515   71637 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-824402" context rescaled to 1 replicas
	I0812 12:08:32.367894   71637 node_ready.go:53] node "flannel-824402" has status "Ready":"False"
	I0812 12:08:31.009520   74546 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0812 12:08:31.009710   74546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:08:31.009764   74546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:08:31.026109   74546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36249
	I0812 12:08:31.026617   74546 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:08:31.027244   74546 main.go:141] libmachine: Using API Version  1
	I0812 12:08:31.027273   74546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:08:31.027692   74546 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:08:31.027993   74546 main.go:141] libmachine: (bridge-824402) Calling .GetMachineName
	I0812 12:08:31.028183   74546 main.go:141] libmachine: (bridge-824402) Calling .DriverName
	I0812 12:08:31.028382   74546 start.go:159] libmachine.API.Create for "bridge-824402" (driver="kvm2")
	I0812 12:08:31.028415   74546 client.go:168] LocalClient.Create starting
	I0812 12:08:31.028451   74546 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem
	I0812 12:08:31.028495   74546 main.go:141] libmachine: Decoding PEM data...
	I0812 12:08:31.028525   74546 main.go:141] libmachine: Parsing certificate...
	I0812 12:08:31.028619   74546 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem
	I0812 12:08:31.028665   74546 main.go:141] libmachine: Decoding PEM data...
	I0812 12:08:31.028682   74546 main.go:141] libmachine: Parsing certificate...
	I0812 12:08:31.028705   74546 main.go:141] libmachine: Running pre-create checks...
	I0812 12:08:31.028717   74546 main.go:141] libmachine: (bridge-824402) Calling .PreCreateCheck
	I0812 12:08:31.029109   74546 main.go:141] libmachine: (bridge-824402) Calling .GetConfigRaw
	I0812 12:08:31.029616   74546 main.go:141] libmachine: Creating machine...
	I0812 12:08:31.029634   74546 main.go:141] libmachine: (bridge-824402) Calling .Create
	I0812 12:08:31.029807   74546 main.go:141] libmachine: (bridge-824402) Creating KVM machine...
	I0812 12:08:31.031435   74546 main.go:141] libmachine: (bridge-824402) DBG | found existing default KVM network
	I0812 12:08:31.033523   74546 main.go:141] libmachine: (bridge-824402) DBG | I0812 12:08:31.033361   74577 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000270100}
	I0812 12:08:31.033570   74546 main.go:141] libmachine: (bridge-824402) DBG | created network xml: 
	I0812 12:08:31.033590   74546 main.go:141] libmachine: (bridge-824402) DBG | <network>
	I0812 12:08:31.033608   74546 main.go:141] libmachine: (bridge-824402) DBG |   <name>mk-bridge-824402</name>
	I0812 12:08:31.033616   74546 main.go:141] libmachine: (bridge-824402) DBG |   <dns enable='no'/>
	I0812 12:08:31.033624   74546 main.go:141] libmachine: (bridge-824402) DBG |   
	I0812 12:08:31.033633   74546 main.go:141] libmachine: (bridge-824402) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0812 12:08:31.033643   74546 main.go:141] libmachine: (bridge-824402) DBG |     <dhcp>
	I0812 12:08:31.033654   74546 main.go:141] libmachine: (bridge-824402) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0812 12:08:31.033665   74546 main.go:141] libmachine: (bridge-824402) DBG |     </dhcp>
	I0812 12:08:31.033686   74546 main.go:141] libmachine: (bridge-824402) DBG |   </ip>
	I0812 12:08:31.033700   74546 main.go:141] libmachine: (bridge-824402) DBG |   
	I0812 12:08:31.033707   74546 main.go:141] libmachine: (bridge-824402) DBG | </network>
	I0812 12:08:31.033719   74546 main.go:141] libmachine: (bridge-824402) DBG | 
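The XML dumped above is the isolated libvirt network (192.168.39.0/24 with its own DHCP range) that the kvm2 driver creates for this profile. For reference, the same definition can be registered by hand; a minimal Go sketch that shells out to virsh (the temp-file handling and the choice of the virsh CLI over the libvirt API are assumptions):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // networkXML mirrors the <network> definition printed in the log above.
    const networkXML = `<network>
      <name>mk-bridge-824402</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
        </dhcp>
      </ip>
    </network>`

    func main() {
        // Write the definition to a temporary file and hand it to virsh.
        f, err := os.CreateTemp("", "mk-bridge-*.xml")
        if err != nil {
            panic(err)
        }
        defer os.Remove(f.Name())
        if _, err := f.WriteString(networkXML); err != nil {
            panic(err)
        }
        f.Close()

        for _, args := range [][]string{
            {"net-define", f.Name()},          // register the network with libvirt
            {"net-start", "mk-bridge-824402"}, // bring it up (bridge + dnsmasq)
        } {
            cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Fprintln(os.Stderr, "virsh", args, "failed:", err)
                os.Exit(1)
            }
        }
    }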
	I0812 12:08:31.039620   74546 main.go:141] libmachine: (bridge-824402) DBG | trying to create private KVM network mk-bridge-824402 192.168.39.0/24...
	I0812 12:08:31.134197   74546 main.go:141] libmachine: (bridge-824402) DBG | private KVM network mk-bridge-824402 192.168.39.0/24 created
	I0812 12:08:31.134237   74546 main.go:141] libmachine: (bridge-824402) DBG | I0812 12:08:31.134186   74577 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 12:08:31.134250   74546 main.go:141] libmachine: (bridge-824402) Setting up store path in /home/jenkins/minikube-integration/19409-3774/.minikube/machines/bridge-824402 ...
	I0812 12:08:31.134265   74546 main.go:141] libmachine: (bridge-824402) Building disk image from file:///home/jenkins/minikube-integration/19409-3774/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0812 12:08:31.134377   74546 main.go:141] libmachine: (bridge-824402) Downloading /home/jenkins/minikube-integration/19409-3774/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19409-3774/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0812 12:08:31.404709   74546 main.go:141] libmachine: (bridge-824402) DBG | I0812 12:08:31.404574   74577 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/bridge-824402/id_rsa...
	I0812 12:08:31.568365   74546 main.go:141] libmachine: (bridge-824402) DBG | I0812 12:08:31.568214   74577 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/bridge-824402/bridge-824402.rawdisk...
	I0812 12:08:31.568401   74546 main.go:141] libmachine: (bridge-824402) DBG | Writing magic tar header
	I0812 12:08:31.568416   74546 main.go:141] libmachine: (bridge-824402) DBG | Writing SSH key tar header
	I0812 12:08:31.568509   74546 main.go:141] libmachine: (bridge-824402) DBG | I0812 12:08:31.568407   74577 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19409-3774/.minikube/machines/bridge-824402 ...
	I0812 12:08:31.568622   74546 main.go:141] libmachine: (bridge-824402) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/bridge-824402
	I0812 12:08:31.568640   74546 main.go:141] libmachine: (bridge-824402) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube/machines
	I0812 12:08:31.568653   74546 main.go:141] libmachine: (bridge-824402) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube/machines/bridge-824402 (perms=drwx------)
	I0812 12:08:31.568667   74546 main.go:141] libmachine: (bridge-824402) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube/machines (perms=drwxr-xr-x)
	I0812 12:08:31.568681   74546 main.go:141] libmachine: (bridge-824402) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774/.minikube (perms=drwxr-xr-x)
	I0812 12:08:31.568717   74546 main.go:141] libmachine: (bridge-824402) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 12:08:31.568736   74546 main.go:141] libmachine: (bridge-824402) Setting executable bit set on /home/jenkins/minikube-integration/19409-3774 (perms=drwxrwxr-x)
	I0812 12:08:31.568746   74546 main.go:141] libmachine: (bridge-824402) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19409-3774
	I0812 12:08:31.568761   74546 main.go:141] libmachine: (bridge-824402) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0812 12:08:31.568773   74546 main.go:141] libmachine: (bridge-824402) DBG | Checking permissions on dir: /home/jenkins
	I0812 12:08:31.568785   74546 main.go:141] libmachine: (bridge-824402) DBG | Checking permissions on dir: /home
	I0812 12:08:31.568799   74546 main.go:141] libmachine: (bridge-824402) DBG | Skipping /home - not owner
	I0812 12:08:31.568809   74546 main.go:141] libmachine: (bridge-824402) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0812 12:08:31.568827   74546 main.go:141] libmachine: (bridge-824402) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0812 12:08:31.568837   74546 main.go:141] libmachine: (bridge-824402) Creating domain...
	I0812 12:08:31.570027   74546 main.go:141] libmachine: (bridge-824402) define libvirt domain using xml: 
	I0812 12:08:31.570064   74546 main.go:141] libmachine: (bridge-824402) <domain type='kvm'>
	I0812 12:08:31.570074   74546 main.go:141] libmachine: (bridge-824402)   <name>bridge-824402</name>
	I0812 12:08:31.570084   74546 main.go:141] libmachine: (bridge-824402)   <memory unit='MiB'>3072</memory>
	I0812 12:08:31.570102   74546 main.go:141] libmachine: (bridge-824402)   <vcpu>2</vcpu>
	I0812 12:08:31.570109   74546 main.go:141] libmachine: (bridge-824402)   <features>
	I0812 12:08:31.570117   74546 main.go:141] libmachine: (bridge-824402)     <acpi/>
	I0812 12:08:31.570123   74546 main.go:141] libmachine: (bridge-824402)     <apic/>
	I0812 12:08:31.570131   74546 main.go:141] libmachine: (bridge-824402)     <pae/>
	I0812 12:08:31.570138   74546 main.go:141] libmachine: (bridge-824402)     
	I0812 12:08:31.570146   74546 main.go:141] libmachine: (bridge-824402)   </features>
	I0812 12:08:31.570153   74546 main.go:141] libmachine: (bridge-824402)   <cpu mode='host-passthrough'>
	I0812 12:08:31.570160   74546 main.go:141] libmachine: (bridge-824402)   
	I0812 12:08:31.570172   74546 main.go:141] libmachine: (bridge-824402)   </cpu>
	I0812 12:08:31.570179   74546 main.go:141] libmachine: (bridge-824402)   <os>
	I0812 12:08:31.570185   74546 main.go:141] libmachine: (bridge-824402)     <type>hvm</type>
	I0812 12:08:31.570192   74546 main.go:141] libmachine: (bridge-824402)     <boot dev='cdrom'/>
	I0812 12:08:31.570199   74546 main.go:141] libmachine: (bridge-824402)     <boot dev='hd'/>
	I0812 12:08:31.570206   74546 main.go:141] libmachine: (bridge-824402)     <bootmenu enable='no'/>
	I0812 12:08:31.570217   74546 main.go:141] libmachine: (bridge-824402)   </os>
	I0812 12:08:31.570224   74546 main.go:141] libmachine: (bridge-824402)   <devices>
	I0812 12:08:31.570231   74546 main.go:141] libmachine: (bridge-824402)     <disk type='file' device='cdrom'>
	I0812 12:08:31.570245   74546 main.go:141] libmachine: (bridge-824402)       <source file='/home/jenkins/minikube-integration/19409-3774/.minikube/machines/bridge-824402/boot2docker.iso'/>
	I0812 12:08:31.570252   74546 main.go:141] libmachine: (bridge-824402)       <target dev='hdc' bus='scsi'/>
	I0812 12:08:31.570260   74546 main.go:141] libmachine: (bridge-824402)       <readonly/>
	I0812 12:08:31.570266   74546 main.go:141] libmachine: (bridge-824402)     </disk>
	I0812 12:08:31.570276   74546 main.go:141] libmachine: (bridge-824402)     <disk type='file' device='disk'>
	I0812 12:08:31.570285   74546 main.go:141] libmachine: (bridge-824402)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0812 12:08:31.570297   74546 main.go:141] libmachine: (bridge-824402)       <source file='/home/jenkins/minikube-integration/19409-3774/.minikube/machines/bridge-824402/bridge-824402.rawdisk'/>
	I0812 12:08:31.570309   74546 main.go:141] libmachine: (bridge-824402)       <target dev='hda' bus='virtio'/>
	I0812 12:08:31.570316   74546 main.go:141] libmachine: (bridge-824402)     </disk>
	I0812 12:08:31.570323   74546 main.go:141] libmachine: (bridge-824402)     <interface type='network'>
	I0812 12:08:31.570332   74546 main.go:141] libmachine: (bridge-824402)       <source network='mk-bridge-824402'/>
	I0812 12:08:31.570353   74546 main.go:141] libmachine: (bridge-824402)       <model type='virtio'/>
	I0812 12:08:31.570362   74546 main.go:141] libmachine: (bridge-824402)     </interface>
	I0812 12:08:31.570369   74546 main.go:141] libmachine: (bridge-824402)     <interface type='network'>
	I0812 12:08:31.570377   74546 main.go:141] libmachine: (bridge-824402)       <source network='default'/>
	I0812 12:08:31.570384   74546 main.go:141] libmachine: (bridge-824402)       <model type='virtio'/>
	I0812 12:08:31.570392   74546 main.go:141] libmachine: (bridge-824402)     </interface>
	I0812 12:08:31.570407   74546 main.go:141] libmachine: (bridge-824402)     <serial type='pty'>
	I0812 12:08:31.570416   74546 main.go:141] libmachine: (bridge-824402)       <target port='0'/>
	I0812 12:08:31.570422   74546 main.go:141] libmachine: (bridge-824402)     </serial>
	I0812 12:08:31.570430   74546 main.go:141] libmachine: (bridge-824402)     <console type='pty'>
	I0812 12:08:31.570452   74546 main.go:141] libmachine: (bridge-824402)       <target type='serial' port='0'/>
	I0812 12:08:31.570461   74546 main.go:141] libmachine: (bridge-824402)     </console>
	I0812 12:08:31.570469   74546 main.go:141] libmachine: (bridge-824402)     <rng model='virtio'>
	I0812 12:08:31.570495   74546 main.go:141] libmachine: (bridge-824402)       <backend model='random'>/dev/random</backend>
	I0812 12:08:31.570514   74546 main.go:141] libmachine: (bridge-824402)     </rng>
	I0812 12:08:31.570531   74546 main.go:141] libmachine: (bridge-824402)     
	I0812 12:08:31.570535   74546 main.go:141] libmachine: (bridge-824402)     
	I0812 12:08:31.570541   74546 main.go:141] libmachine: (bridge-824402)   </devices>
	I0812 12:08:31.570545   74546 main.go:141] libmachine: (bridge-824402) </domain>
	I0812 12:08:31.570551   74546 main.go:141] libmachine: (bridge-824402) 
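The domain XML above gives the VM two virtio NICs (one on mk-bridge-824402, one on the default NAT network), the boot2docker ISO as a CD-ROM boot device, and the raw disk created a few lines earlier. Defining and booting such a domain programmatically looks roughly like the sketch below, which assumes the Go bindings at libvirt.org/go/libvirt and a hypothetical bridge-824402.xml file holding that XML:

    package main

    import (
        "fmt"
        "os"

        "libvirt.org/go/libvirt"
    )

    func main() {
        // Hypothetical file holding the <domain> definition dumped in the log above.
        domainXML, err := os.ReadFile("bridge-824402.xml")
        if err != nil {
            panic(err)
        }

        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // "define libvirt domain using xml" ...
        dom, err := conn.DomainDefineXML(string(domainXML))
        if err != nil {
            panic(err)
        }
        defer dom.Free()

        // ... then "Creating domain..." actually boots the VM.
        if err := dom.Create(); err != nil {
            panic(err)
        }
        fmt.Println("domain bridge-824402 defined and started")
    }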
	I0812 12:08:31.576118   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:6e:ca:df in network default
	I0812 12:08:31.577035   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:31.577079   74546 main.go:141] libmachine: (bridge-824402) Ensuring networks are active...
	I0812 12:08:31.577997   74546 main.go:141] libmachine: (bridge-824402) Ensuring network default is active
	I0812 12:08:31.578442   74546 main.go:141] libmachine: (bridge-824402) Ensuring network mk-bridge-824402 is active
	I0812 12:08:31.579279   74546 main.go:141] libmachine: (bridge-824402) Getting domain xml...
	I0812 12:08:31.580322   74546 main.go:141] libmachine: (bridge-824402) Creating domain...
	I0812 12:08:33.035840   74546 main.go:141] libmachine: (bridge-824402) Waiting to get IP...
	I0812 12:08:33.036751   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:33.037322   74546 main.go:141] libmachine: (bridge-824402) DBG | unable to find current IP address of domain bridge-824402 in network mk-bridge-824402
	I0812 12:08:33.037614   74546 main.go:141] libmachine: (bridge-824402) DBG | I0812 12:08:33.037231   74577 retry.go:31] will retry after 304.253727ms: waiting for machine to come up
	I0812 12:08:33.342603   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:33.343122   74546 main.go:141] libmachine: (bridge-824402) DBG | unable to find current IP address of domain bridge-824402 in network mk-bridge-824402
	I0812 12:08:33.343148   74546 main.go:141] libmachine: (bridge-824402) DBG | I0812 12:08:33.343086   74577 retry.go:31] will retry after 265.121377ms: waiting for machine to come up
	I0812 12:08:33.610366   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:33.610879   74546 main.go:141] libmachine: (bridge-824402) DBG | unable to find current IP address of domain bridge-824402 in network mk-bridge-824402
	I0812 12:08:33.610908   74546 main.go:141] libmachine: (bridge-824402) DBG | I0812 12:08:33.610865   74577 retry.go:31] will retry after 344.565882ms: waiting for machine to come up
	I0812 12:08:33.957526   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:33.958216   74546 main.go:141] libmachine: (bridge-824402) DBG | unable to find current IP address of domain bridge-824402 in network mk-bridge-824402
	I0812 12:08:33.958246   74546 main.go:141] libmachine: (bridge-824402) DBG | I0812 12:08:33.958170   74577 retry.go:31] will retry after 608.413009ms: waiting for machine to come up
	I0812 12:08:34.568144   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:34.568978   74546 main.go:141] libmachine: (bridge-824402) DBG | unable to find current IP address of domain bridge-824402 in network mk-bridge-824402
	I0812 12:08:34.569004   74546 main.go:141] libmachine: (bridge-824402) DBG | I0812 12:08:34.568930   74577 retry.go:31] will retry after 686.943954ms: waiting for machine to come up
	I0812 12:08:35.517179   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:35.519190   74546 main.go:141] libmachine: (bridge-824402) DBG | unable to find current IP address of domain bridge-824402 in network mk-bridge-824402
	I0812 12:08:35.519214   74546 main.go:141] libmachine: (bridge-824402) DBG | I0812 12:08:35.519144   74577 retry.go:31] will retry after 585.607504ms: waiting for machine to come up
	I0812 12:08:34.368932   71637 node_ready.go:53] node "flannel-824402" has status "Ready":"False"
	I0812 12:08:36.868260   71637 node_ready.go:53] node "flannel-824402" has status "Ready":"False"
	I0812 12:08:36.106410   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:36.106979   74546 main.go:141] libmachine: (bridge-824402) DBG | unable to find current IP address of domain bridge-824402 in network mk-bridge-824402
	I0812 12:08:36.107030   74546 main.go:141] libmachine: (bridge-824402) DBG | I0812 12:08:36.106922   74577 retry.go:31] will retry after 807.861182ms: waiting for machine to come up
	I0812 12:08:36.915983   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:36.916609   74546 main.go:141] libmachine: (bridge-824402) DBG | unable to find current IP address of domain bridge-824402 in network mk-bridge-824402
	I0812 12:08:36.916638   74546 main.go:141] libmachine: (bridge-824402) DBG | I0812 12:08:36.916557   74577 retry.go:31] will retry after 1.217411034s: waiting for machine to come up
	I0812 12:08:38.135975   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:38.136447   74546 main.go:141] libmachine: (bridge-824402) DBG | unable to find current IP address of domain bridge-824402 in network mk-bridge-824402
	I0812 12:08:38.136476   74546 main.go:141] libmachine: (bridge-824402) DBG | I0812 12:08:38.136391   74577 retry.go:31] will retry after 1.657388376s: waiting for machine to come up
	I0812 12:08:39.795914   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:39.796355   74546 main.go:141] libmachine: (bridge-824402) DBG | unable to find current IP address of domain bridge-824402 in network mk-bridge-824402
	I0812 12:08:39.796380   74546 main.go:141] libmachine: (bridge-824402) DBG | I0812 12:08:39.796313   74577 retry.go:31] will retry after 2.019479042s: waiting for machine to come up
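The retry.go lines above poll libvirt for a DHCP lease matching the VM's MAC address, waiting a little longer after each failed attempt. A rough equivalent using the virsh CLI (the MAC and network name are taken from this log; the backoff schedule and attempt limit are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        const mac = "52:54:00:86:92:2b" // MAC of the bridge-824402 NIC on mk-bridge-824402
        delay := 300 * time.Millisecond
        for attempt := 1; attempt <= 15; attempt++ {
            out, err := exec.Command("virsh", "--connect", "qemu:///system",
                "net-dhcp-leases", "mk-bridge-824402").Output()
            if err == nil && strings.Contains(strings.ToLower(string(out)), mac) {
                fmt.Print("lease found:\n" + string(out))
                return
            }
            fmt.Printf("attempt %d: no lease for %s yet, retrying in %v\n", attempt, mac, delay)
            time.Sleep(delay)
            if delay < 5*time.Second {
                delay += delay / 2 // grow the wait, as the retry.go lines above do
            }
        }
        fmt.Println("gave up waiting for an IP")
    }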
	I0812 12:08:38.368025   71637 node_ready.go:49] node "flannel-824402" has status "Ready":"True"
	I0812 12:08:38.368050   71637 node_ready.go:38] duration metric: took 8.003376196s for node "flannel-824402" to be "Ready" ...
	I0812 12:08:38.368059   71637 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 12:08:38.376959   71637 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-j6f9h" in "kube-system" namespace to be "Ready" ...
	I0812 12:08:40.383803   71637 pod_ready.go:102] pod "coredns-7db6d8ff4d-j6f9h" in "kube-system" namespace has status "Ready":"False"
	I0812 12:08:42.383861   71637 pod_ready.go:102] pod "coredns-7db6d8ff4d-j6f9h" in "kube-system" namespace has status "Ready":"False"
	I0812 12:08:41.817032   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:41.817546   74546 main.go:141] libmachine: (bridge-824402) DBG | unable to find current IP address of domain bridge-824402 in network mk-bridge-824402
	I0812 12:08:41.817575   74546 main.go:141] libmachine: (bridge-824402) DBG | I0812 12:08:41.817501   74577 retry.go:31] will retry after 2.687974527s: waiting for machine to come up
	I0812 12:08:44.508380   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:44.508935   74546 main.go:141] libmachine: (bridge-824402) DBG | unable to find current IP address of domain bridge-824402 in network mk-bridge-824402
	I0812 12:08:44.508955   74546 main.go:141] libmachine: (bridge-824402) DBG | I0812 12:08:44.508894   74577 retry.go:31] will retry after 3.183393413s: waiting for machine to come up
	I0812 12:08:44.883515   71637 pod_ready.go:102] pod "coredns-7db6d8ff4d-j6f9h" in "kube-system" namespace has status "Ready":"False"
	I0812 12:08:47.383439   71637 pod_ready.go:102] pod "coredns-7db6d8ff4d-j6f9h" in "kube-system" namespace has status "Ready":"False"
	I0812 12:08:47.694058   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:47.694531   74546 main.go:141] libmachine: (bridge-824402) DBG | unable to find current IP address of domain bridge-824402 in network mk-bridge-824402
	I0812 12:08:47.694558   74546 main.go:141] libmachine: (bridge-824402) DBG | I0812 12:08:47.694494   74577 retry.go:31] will retry after 3.978996244s: waiting for machine to come up
	I0812 12:08:49.383608   71637 pod_ready.go:102] pod "coredns-7db6d8ff4d-j6f9h" in "kube-system" namespace has status "Ready":"False"
	I0812 12:08:51.383941   71637 pod_ready.go:102] pod "coredns-7db6d8ff4d-j6f9h" in "kube-system" namespace has status "Ready":"False"
	I0812 12:08:51.675761   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:51.676260   74546 main.go:141] libmachine: (bridge-824402) Found IP for machine: 192.168.39.247
	I0812 12:08:51.676288   74546 main.go:141] libmachine: (bridge-824402) Reserving static IP address...
	I0812 12:08:51.676302   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has current primary IP address 192.168.39.247 and MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:51.676716   74546 main.go:141] libmachine: (bridge-824402) DBG | unable to find host DHCP lease matching {name: "bridge-824402", mac: "52:54:00:86:92:2b", ip: "192.168.39.247"} in network mk-bridge-824402
	I0812 12:08:51.758898   74546 main.go:141] libmachine: (bridge-824402) DBG | Getting to WaitForSSH function...
	I0812 12:08:51.758929   74546 main.go:141] libmachine: (bridge-824402) Reserved static IP address: 192.168.39.247
	I0812 12:08:51.758943   74546 main.go:141] libmachine: (bridge-824402) Waiting for SSH to be available...
	I0812 12:08:51.762137   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:51.762645   74546 main.go:141] libmachine: (bridge-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:92:2b", ip: ""} in network mk-bridge-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:08:45 +0000 UTC Type:0 Mac:52:54:00:86:92:2b Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:minikube Clientid:01:52:54:00:86:92:2b}
	I0812 12:08:51.762747   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined IP address 192.168.39.247 and MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:51.762794   74546 main.go:141] libmachine: (bridge-824402) DBG | Using SSH client type: external
	I0812 12:08:51.762807   74546 main.go:141] libmachine: (bridge-824402) DBG | Using SSH private key: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/bridge-824402/id_rsa (-rw-------)
	I0812 12:08:51.762837   74546 main.go:141] libmachine: (bridge-824402) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.247 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19409-3774/.minikube/machines/bridge-824402/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0812 12:08:51.762848   74546 main.go:141] libmachine: (bridge-824402) DBG | About to run SSH command:
	I0812 12:08:51.762862   74546 main.go:141] libmachine: (bridge-824402) DBG | exit 0
	I0812 12:08:51.897022   74546 main.go:141] libmachine: (bridge-824402) DBG | SSH cmd err, output: <nil>: 
	I0812 12:08:51.897274   74546 main.go:141] libmachine: (bridge-824402) KVM machine creation complete!
	I0812 12:08:51.897596   74546 main.go:141] libmachine: (bridge-824402) Calling .GetConfigRaw
	I0812 12:08:51.898130   74546 main.go:141] libmachine: (bridge-824402) Calling .DriverName
	I0812 12:08:51.898325   74546 main.go:141] libmachine: (bridge-824402) Calling .DriverName
	I0812 12:08:51.898519   74546 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0812 12:08:51.898535   74546 main.go:141] libmachine: (bridge-824402) Calling .GetState
	I0812 12:08:51.900267   74546 main.go:141] libmachine: Detecting operating system of created instance...
	I0812 12:08:51.900284   74546 main.go:141] libmachine: Waiting for SSH to be available...
	I0812 12:08:51.900292   74546 main.go:141] libmachine: Getting to WaitForSSH function...
	I0812 12:08:51.900297   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHHostname
	I0812 12:08:51.902580   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:51.902956   74546 main.go:141] libmachine: (bridge-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:92:2b", ip: ""} in network mk-bridge-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:08:45 +0000 UTC Type:0 Mac:52:54:00:86:92:2b Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:bridge-824402 Clientid:01:52:54:00:86:92:2b}
	I0812 12:08:51.902985   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined IP address 192.168.39.247 and MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:51.903110   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHPort
	I0812 12:08:51.903309   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHKeyPath
	I0812 12:08:51.903530   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHKeyPath
	I0812 12:08:51.903683   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHUsername
	I0812 12:08:51.903919   74546 main.go:141] libmachine: Using SSH client type: native
	I0812 12:08:51.904084   74546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0812 12:08:51.904095   74546 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0812 12:08:52.012973   74546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
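Both SSH probes above simply run "exit 0" against the new VM: first through the external ssh binary with non-interactive options, then through the native Go client. A sketch of the external variant (the options, key path, user, and address are copied from the log; wiring them through os/exec is the assumption):

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // Reachability probe in the spirit of WaitForSSH above: any exit code
        // other than 0 means the machine is not ready for provisioning yet.
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "ControlMaster=no",
            "-o", "ControlPath=none",
            "-o", "LogLevel=quiet",
            "-o", "PasswordAuthentication=no",
            "-o", "ServerAliveInterval=60",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", "/home/jenkins/minikube-integration/19409-3774/.minikube/machines/bridge-824402/id_rsa",
            "-p", "22",
            "docker@192.168.39.247",
            "exit 0",
        }
        cmd := exec.Command("ssh", args...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            os.Exit(1)
        }
    }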
	I0812 12:08:52.013037   74546 main.go:141] libmachine: Detecting the provisioner...
	I0812 12:08:52.013045   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHHostname
	I0812 12:08:52.016085   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:52.016516   74546 main.go:141] libmachine: (bridge-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:92:2b", ip: ""} in network mk-bridge-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:08:45 +0000 UTC Type:0 Mac:52:54:00:86:92:2b Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:bridge-824402 Clientid:01:52:54:00:86:92:2b}
	I0812 12:08:52.016556   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined IP address 192.168.39.247 and MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:52.016698   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHPort
	I0812 12:08:52.016927   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHKeyPath
	I0812 12:08:52.017102   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHKeyPath
	I0812 12:08:52.017255   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHUsername
	I0812 12:08:52.017451   74546 main.go:141] libmachine: Using SSH client type: native
	I0812 12:08:52.017677   74546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0812 12:08:52.017692   74546 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0812 12:08:52.129772   74546 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0812 12:08:52.129872   74546 main.go:141] libmachine: found compatible host: buildroot
	I0812 12:08:52.129887   74546 main.go:141] libmachine: Provisioning with buildroot...
	I0812 12:08:52.129899   74546 main.go:141] libmachine: (bridge-824402) Calling .GetMachineName
	I0812 12:08:52.130248   74546 buildroot.go:166] provisioning hostname "bridge-824402"
	I0812 12:08:52.130278   74546 main.go:141] libmachine: (bridge-824402) Calling .GetMachineName
	I0812 12:08:52.130503   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHHostname
	I0812 12:08:52.133407   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:52.133892   74546 main.go:141] libmachine: (bridge-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:92:2b", ip: ""} in network mk-bridge-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:08:45 +0000 UTC Type:0 Mac:52:54:00:86:92:2b Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:bridge-824402 Clientid:01:52:54:00:86:92:2b}
	I0812 12:08:52.133936   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined IP address 192.168.39.247 and MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:52.134109   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHPort
	I0812 12:08:52.134353   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHKeyPath
	I0812 12:08:52.134512   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHKeyPath
	I0812 12:08:52.134633   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHUsername
	I0812 12:08:52.134831   74546 main.go:141] libmachine: Using SSH client type: native
	I0812 12:08:52.135009   74546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0812 12:08:52.135020   74546 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-824402 && echo "bridge-824402" | sudo tee /etc/hostname
	I0812 12:08:52.260692   74546 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-824402
	
	I0812 12:08:52.260717   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHHostname
	I0812 12:08:52.263720   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:52.264114   74546 main.go:141] libmachine: (bridge-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:92:2b", ip: ""} in network mk-bridge-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:08:45 +0000 UTC Type:0 Mac:52:54:00:86:92:2b Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:bridge-824402 Clientid:01:52:54:00:86:92:2b}
	I0812 12:08:52.264151   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined IP address 192.168.39.247 and MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:52.264380   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHPort
	I0812 12:08:52.264585   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHKeyPath
	I0812 12:08:52.264791   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHKeyPath
	I0812 12:08:52.264983   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHUsername
	I0812 12:08:52.265182   74546 main.go:141] libmachine: Using SSH client type: native
	I0812 12:08:52.265335   74546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0812 12:08:52.265362   74546 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-824402' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-824402/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-824402' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 12:08:52.387305   74546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 12:08:52.387341   74546 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19409-3774/.minikube CaCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19409-3774/.minikube}
	I0812 12:08:52.387399   74546 buildroot.go:174] setting up certificates
	I0812 12:08:52.387410   74546 provision.go:84] configureAuth start
	I0812 12:08:52.387422   74546 main.go:141] libmachine: (bridge-824402) Calling .GetMachineName
	I0812 12:08:52.387733   74546 main.go:141] libmachine: (bridge-824402) Calling .GetIP
	I0812 12:08:52.391147   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:52.391533   74546 main.go:141] libmachine: (bridge-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:92:2b", ip: ""} in network mk-bridge-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:08:45 +0000 UTC Type:0 Mac:52:54:00:86:92:2b Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:bridge-824402 Clientid:01:52:54:00:86:92:2b}
	I0812 12:08:52.391562   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined IP address 192.168.39.247 and MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:52.391827   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHHostname
	I0812 12:08:52.394541   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:52.394861   74546 main.go:141] libmachine: (bridge-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:92:2b", ip: ""} in network mk-bridge-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:08:45 +0000 UTC Type:0 Mac:52:54:00:86:92:2b Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:bridge-824402 Clientid:01:52:54:00:86:92:2b}
	I0812 12:08:52.394904   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined IP address 192.168.39.247 and MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:52.395050   74546 provision.go:143] copyHostCerts
	I0812 12:08:52.395114   74546 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem, removing ...
	I0812 12:08:52.395124   74546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem
	I0812 12:08:52.395200   74546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/ca.pem (1082 bytes)
	I0812 12:08:52.395299   74546 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem, removing ...
	I0812 12:08:52.395307   74546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem
	I0812 12:08:52.395333   74546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/cert.pem (1123 bytes)
	I0812 12:08:52.395401   74546 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem, removing ...
	I0812 12:08:52.395408   74546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem
	I0812 12:08:52.395433   74546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19409-3774/.minikube/key.pem (1679 bytes)
	I0812 12:08:52.395497   74546 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem org=jenkins.bridge-824402 san=[127.0.0.1 192.168.39.247 bridge-824402 localhost minikube]
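provision.go then issues a server certificate whose SANs cover the loopback address, the VM IP, and the host names listed above. For reference, a Go crypto/x509 sketch that produces a certificate with the same SANs (minikube signs with its ca.pem/ca-key.pem; the self-signing, key size, and validity handling here are simplifications):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }

        // SANs taken from the san=[...] list in the log; other fields are illustrative.
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.bridge-824402"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"bridge-824402", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.247")},
        }

        // Self-signed for brevity; the real code uses the CA as the parent certificate.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
            panic(err)
        }
    }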
	I0812 12:08:52.567862   74546 provision.go:177] copyRemoteCerts
	I0812 12:08:52.567921   74546 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 12:08:52.567944   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHHostname
	I0812 12:08:52.571398   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:52.572007   74546 main.go:141] libmachine: (bridge-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:92:2b", ip: ""} in network mk-bridge-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:08:45 +0000 UTC Type:0 Mac:52:54:00:86:92:2b Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:bridge-824402 Clientid:01:52:54:00:86:92:2b}
	I0812 12:08:52.572042   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined IP address 192.168.39.247 and MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:52.572249   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHPort
	I0812 12:08:52.572440   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHKeyPath
	I0812 12:08:52.572593   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHUsername
	I0812 12:08:52.572719   74546 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/bridge-824402/id_rsa Username:docker}
	I0812 12:08:52.659498   74546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0812 12:08:52.683268   74546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0812 12:08:52.707056   74546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0812 12:08:52.729964   74546 provision.go:87] duration metric: took 342.541933ms to configureAuth
	I0812 12:08:52.729992   74546 buildroot.go:189] setting minikube options for container-runtime
	I0812 12:08:52.730146   74546 config.go:182] Loaded profile config "bridge-824402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:08:52.730229   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHHostname
	I0812 12:08:52.733080   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:52.733502   74546 main.go:141] libmachine: (bridge-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:92:2b", ip: ""} in network mk-bridge-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:08:45 +0000 UTC Type:0 Mac:52:54:00:86:92:2b Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:bridge-824402 Clientid:01:52:54:00:86:92:2b}
	I0812 12:08:52.733531   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined IP address 192.168.39.247 and MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:52.733701   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHPort
	I0812 12:08:52.733972   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHKeyPath
	I0812 12:08:52.734191   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHKeyPath
	I0812 12:08:52.734370   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHUsername
	I0812 12:08:52.734555   74546 main.go:141] libmachine: Using SSH client type: native
	I0812 12:08:52.734708   74546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0812 12:08:52.734722   74546 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 12:08:52.997992   74546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
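The "%!s(MISSING)" in the command above is Go's fmt marker for a %s verb that received no operand, so it is a formatting artifact in the log rather than shell syntax; the value actually written to /etc/sysconfig/crio.minikube is the CRIO_MINIKUBE_OPTIONS line echoed back in the output. A hypothetical reconstruction of the intended command string (not minikube's source):

    package main

    import "fmt"

    func main() {
        // The remote shell command embeds its own printf whose argument is the
        // sysconfig content; here both are stitched together explicitly.
        opts := "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
        cmd := fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "%s" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, opts)
        fmt.Println(cmd)
    }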
	
	I0812 12:08:52.998022   74546 main.go:141] libmachine: Checking connection to Docker...
	I0812 12:08:52.998031   74546 main.go:141] libmachine: (bridge-824402) Calling .GetURL
	I0812 12:08:52.999333   74546 main.go:141] libmachine: (bridge-824402) DBG | Using libvirt version 6000000
	I0812 12:08:53.001630   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:53.002033   74546 main.go:141] libmachine: (bridge-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:92:2b", ip: ""} in network mk-bridge-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:08:45 +0000 UTC Type:0 Mac:52:54:00:86:92:2b Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:bridge-824402 Clientid:01:52:54:00:86:92:2b}
	I0812 12:08:53.002076   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined IP address 192.168.39.247 and MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:53.002357   74546 main.go:141] libmachine: Docker is up and running!
	I0812 12:08:53.002370   74546 main.go:141] libmachine: Reticulating splines...
	I0812 12:08:53.002377   74546 client.go:171] duration metric: took 21.973951751s to LocalClient.Create
	I0812 12:08:53.002403   74546 start.go:167] duration metric: took 21.974022821s to libmachine.API.Create "bridge-824402"
	I0812 12:08:53.002415   74546 start.go:293] postStartSetup for "bridge-824402" (driver="kvm2")
	I0812 12:08:53.002443   74546 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 12:08:53.002462   74546 main.go:141] libmachine: (bridge-824402) Calling .DriverName
	I0812 12:08:53.002708   74546 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 12:08:53.002736   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHHostname
	I0812 12:08:53.005235   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:53.005651   74546 main.go:141] libmachine: (bridge-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:92:2b", ip: ""} in network mk-bridge-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:08:45 +0000 UTC Type:0 Mac:52:54:00:86:92:2b Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:bridge-824402 Clientid:01:52:54:00:86:92:2b}
	I0812 12:08:53.005679   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined IP address 192.168.39.247 and MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:53.005875   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHPort
	I0812 12:08:53.006075   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHKeyPath
	I0812 12:08:53.006272   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHUsername
	I0812 12:08:53.006524   74546 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/bridge-824402/id_rsa Username:docker}
	I0812 12:08:53.095717   74546 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 12:08:53.100137   74546 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 12:08:53.100168   74546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/addons for local assets ...
	I0812 12:08:53.100250   74546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3774/.minikube/files for local assets ...
	I0812 12:08:53.100369   74546 filesync.go:149] local asset: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem -> 109272.pem in /etc/ssl/certs
	I0812 12:08:53.100503   74546 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 12:08:53.110421   74546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /etc/ssl/certs/109272.pem (1708 bytes)
	I0812 12:08:53.135366   74546 start.go:296] duration metric: took 132.936568ms for postStartSetup
	I0812 12:08:53.135414   74546 main.go:141] libmachine: (bridge-824402) Calling .GetConfigRaw
	I0812 12:08:53.136120   74546 main.go:141] libmachine: (bridge-824402) Calling .GetIP
	I0812 12:08:53.138909   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:53.139249   74546 main.go:141] libmachine: (bridge-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:92:2b", ip: ""} in network mk-bridge-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:08:45 +0000 UTC Type:0 Mac:52:54:00:86:92:2b Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:bridge-824402 Clientid:01:52:54:00:86:92:2b}
	I0812 12:08:53.139272   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined IP address 192.168.39.247 and MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:53.139527   74546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/bridge-824402/config.json ...
	I0812 12:08:53.139731   74546 start.go:128] duration metric: took 22.132318879s to createHost
	I0812 12:08:53.139755   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHHostname
	I0812 12:08:53.141950   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:53.142226   74546 main.go:141] libmachine: (bridge-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:92:2b", ip: ""} in network mk-bridge-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:08:45 +0000 UTC Type:0 Mac:52:54:00:86:92:2b Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:bridge-824402 Clientid:01:52:54:00:86:92:2b}
	I0812 12:08:53.142253   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined IP address 192.168.39.247 and MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:53.142372   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHPort
	I0812 12:08:53.142639   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHKeyPath
	I0812 12:08:53.142813   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHKeyPath
	I0812 12:08:53.142957   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHUsername
	I0812 12:08:53.143082   74546 main.go:141] libmachine: Using SSH client type: native
	I0812 12:08:53.143243   74546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0812 12:08:53.143253   74546 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0812 12:08:53.254497   74546 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723464533.226915536
	
	I0812 12:08:53.254521   74546 fix.go:216] guest clock: 1723464533.226915536
	I0812 12:08:53.254531   74546 fix.go:229] Guest: 2024-08-12 12:08:53.226915536 +0000 UTC Remote: 2024-08-12 12:08:53.13974304 +0000 UTC m=+22.268306988 (delta=87.172496ms)
	I0812 12:08:53.254552   74546 fix.go:200] guest clock delta is within tolerance: 87.172496ms
	I0812 12:08:53.254557   74546 start.go:83] releasing machines lock for "bridge-824402", held for 22.247260229s
	I0812 12:08:53.254576   74546 main.go:141] libmachine: (bridge-824402) Calling .DriverName
	I0812 12:08:53.254874   74546 main.go:141] libmachine: (bridge-824402) Calling .GetIP
	I0812 12:08:53.257798   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:53.258270   74546 main.go:141] libmachine: (bridge-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:92:2b", ip: ""} in network mk-bridge-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:08:45 +0000 UTC Type:0 Mac:52:54:00:86:92:2b Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:bridge-824402 Clientid:01:52:54:00:86:92:2b}
	I0812 12:08:53.258293   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined IP address 192.168.39.247 and MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:53.258469   74546 main.go:141] libmachine: (bridge-824402) Calling .DriverName
	I0812 12:08:53.259029   74546 main.go:141] libmachine: (bridge-824402) Calling .DriverName
	I0812 12:08:53.259226   74546 main.go:141] libmachine: (bridge-824402) Calling .DriverName
	I0812 12:08:53.259345   74546 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 12:08:53.259415   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHHostname
	I0812 12:08:53.259492   74546 ssh_runner.go:195] Run: cat /version.json
	I0812 12:08:53.259518   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHHostname
	I0812 12:08:53.262507   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:53.262687   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:53.262956   74546 main.go:141] libmachine: (bridge-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:92:2b", ip: ""} in network mk-bridge-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:08:45 +0000 UTC Type:0 Mac:52:54:00:86:92:2b Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:bridge-824402 Clientid:01:52:54:00:86:92:2b}
	I0812 12:08:53.262995   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined IP address 192.168.39.247 and MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:53.263193   74546 main.go:141] libmachine: (bridge-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:92:2b", ip: ""} in network mk-bridge-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:08:45 +0000 UTC Type:0 Mac:52:54:00:86:92:2b Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:bridge-824402 Clientid:01:52:54:00:86:92:2b}
	I0812 12:08:53.263216   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHPort
	I0812 12:08:53.263214   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined IP address 192.168.39.247 and MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:53.263399   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHKeyPath
	I0812 12:08:53.263420   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHPort
	I0812 12:08:53.263588   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHKeyPath
	I0812 12:08:53.263591   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHUsername
	I0812 12:08:53.263751   74546 main.go:141] libmachine: (bridge-824402) Calling .GetSSHUsername
	I0812 12:08:53.263747   74546 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/bridge-824402/id_rsa Username:docker}
	I0812 12:08:53.263897   74546 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/bridge-824402/id_rsa Username:docker}
	I0812 12:08:53.349985   74546 ssh_runner.go:195] Run: systemctl --version
	I0812 12:08:53.383932   74546 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 12:08:53.550052   74546 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 12:08:53.555918   74546 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 12:08:53.555999   74546 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 12:08:53.578105   74546 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0812 12:08:53.578134   74546 start.go:495] detecting cgroup driver to use...
	I0812 12:08:53.578200   74546 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 12:08:53.596916   74546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 12:08:53.611781   74546 docker.go:217] disabling cri-docker service (if available) ...
	I0812 12:08:53.611848   74546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 12:08:53.625725   74546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 12:08:53.640294   74546 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 12:08:53.757852   74546 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 12:08:53.927990   74546 docker.go:233] disabling docker service ...
	I0812 12:08:53.928078   74546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 12:08:53.942610   74546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 12:08:53.955297   74546 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 12:08:54.092168   74546 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 12:08:54.209875   74546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 12:08:54.224152   74546 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 12:08:54.244270   74546 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0812 12:08:54.244342   74546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:08:54.255151   74546 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 12:08:54.255228   74546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:08:54.265528   74546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:08:54.276056   74546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:08:54.286638   74546 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 12:08:54.297394   74546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:08:54.308719   74546 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:08:54.326444   74546 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
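Purely as an illustration (not part of the captured log): the sed edits above all target /etc/crio/crio.conf.d/02-crio.conf and converge on four settings -- the pause image, the cgroup manager, the conmon cgroup, and the unprivileged-port sysctl. A minimal sketch of an equivalent drop-in, assuming stock [crio.image]/[crio.runtime] TOML sections and a hypothetical file name (99-minikube-sketch.conf):

# Sketch only: writes the same four settings the log's sed commands produce,
# then restarts CRI-O as the log does a few lines further down.
sudo tee /etc/crio/crio.conf.d/99-minikube-sketch.conf <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
EOF
sudo systemctl restart crio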
	I0812 12:08:54.337155   74546 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 12:08:54.347804   74546 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0812 12:08:54.347871   74546 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0812 12:08:54.361293   74546 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 12:08:54.372468   74546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:08:54.502176   74546 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0812 12:08:54.643511   74546 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 12:08:54.643588   74546 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 12:08:54.648623   74546 start.go:563] Will wait 60s for crictl version
	I0812 12:08:54.648684   74546 ssh_runner.go:195] Run: which crictl
	I0812 12:08:54.652973   74546 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 12:08:54.692994   74546 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0812 12:08:54.693087   74546 ssh_runner.go:195] Run: crio --version
	I0812 12:08:54.723092   74546 ssh_runner.go:195] Run: crio --version
	I0812 12:08:54.755331   74546 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0812 12:08:54.756703   74546 main.go:141] libmachine: (bridge-824402) Calling .GetIP
	I0812 12:08:54.759670   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:54.760043   74546 main.go:141] libmachine: (bridge-824402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:92:2b", ip: ""} in network mk-bridge-824402: {Iface:virbr4 ExpiryTime:2024-08-12 13:08:45 +0000 UTC Type:0 Mac:52:54:00:86:92:2b Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:bridge-824402 Clientid:01:52:54:00:86:92:2b}
	I0812 12:08:54.760065   74546 main.go:141] libmachine: (bridge-824402) DBG | domain bridge-824402 has defined IP address 192.168.39.247 and MAC address 52:54:00:86:92:2b in network mk-bridge-824402
	I0812 12:08:54.760361   74546 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0812 12:08:54.764790   74546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 12:08:54.777881   74546 kubeadm.go:883] updating cluster {Name:bridge-824402 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:bridge-824402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0812 12:08:54.778041   74546 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 12:08:54.778092   74546 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 12:08:54.811122   74546 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0812 12:08:54.811211   74546 ssh_runner.go:195] Run: which lz4
	I0812 12:08:54.815524   74546 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0812 12:08:54.819837   74546 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0812 12:08:54.819875   74546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0812 12:08:53.883473   71637 pod_ready.go:102] pod "coredns-7db6d8ff4d-j6f9h" in "kube-system" namespace has status "Ready":"False"
	I0812 12:08:54.884556   71637 pod_ready.go:92] pod "coredns-7db6d8ff4d-j6f9h" in "kube-system" namespace has status "Ready":"True"
	I0812 12:08:54.884585   71637 pod_ready.go:81] duration metric: took 16.507596782s for pod "coredns-7db6d8ff4d-j6f9h" in "kube-system" namespace to be "Ready" ...
	I0812 12:08:54.884595   71637 pod_ready.go:78] waiting up to 15m0s for pod "etcd-flannel-824402" in "kube-system" namespace to be "Ready" ...
	I0812 12:08:54.892695   71637 pod_ready.go:92] pod "etcd-flannel-824402" in "kube-system" namespace has status "Ready":"True"
	I0812 12:08:54.892720   71637 pod_ready.go:81] duration metric: took 8.11919ms for pod "etcd-flannel-824402" in "kube-system" namespace to be "Ready" ...
	I0812 12:08:54.892729   71637 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-flannel-824402" in "kube-system" namespace to be "Ready" ...
	I0812 12:08:54.898641   71637 pod_ready.go:92] pod "kube-apiserver-flannel-824402" in "kube-system" namespace has status "Ready":"True"
	I0812 12:08:54.898665   71637 pod_ready.go:81] duration metric: took 5.929186ms for pod "kube-apiserver-flannel-824402" in "kube-system" namespace to be "Ready" ...
	I0812 12:08:54.898678   71637 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-flannel-824402" in "kube-system" namespace to be "Ready" ...
	I0812 12:08:54.903559   71637 pod_ready.go:92] pod "kube-controller-manager-flannel-824402" in "kube-system" namespace has status "Ready":"True"
	I0812 12:08:54.903583   71637 pod_ready.go:81] duration metric: took 4.896371ms for pod "kube-controller-manager-flannel-824402" in "kube-system" namespace to be "Ready" ...
	I0812 12:08:54.903596   71637 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-cslmr" in "kube-system" namespace to be "Ready" ...
	I0812 12:08:54.909691   71637 pod_ready.go:92] pod "kube-proxy-cslmr" in "kube-system" namespace has status "Ready":"True"
	I0812 12:08:54.909712   71637 pod_ready.go:81] duration metric: took 6.10932ms for pod "kube-proxy-cslmr" in "kube-system" namespace to be "Ready" ...
	I0812 12:08:54.909721   71637 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-flannel-824402" in "kube-system" namespace to be "Ready" ...
	I0812 12:08:55.282468   71637 pod_ready.go:92] pod "kube-scheduler-flannel-824402" in "kube-system" namespace has status "Ready":"True"
	I0812 12:08:55.282496   71637 pod_ready.go:81] duration metric: took 372.767513ms for pod "kube-scheduler-flannel-824402" in "kube-system" namespace to be "Ready" ...
	I0812 12:08:55.282510   71637 pod_ready.go:38] duration metric: took 16.914437217s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 12:08:55.282527   71637 api_server.go:52] waiting for apiserver process to appear ...
	I0812 12:08:55.282590   71637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 12:08:55.306795   71637 api_server.go:72] duration metric: took 25.613805972s to wait for apiserver process to appear ...
	I0812 12:08:55.306824   71637 api_server.go:88] waiting for apiserver healthz status ...
	I0812 12:08:55.306843   71637 api_server.go:253] Checking apiserver healthz at https://192.168.61.135:8443/healthz ...
	I0812 12:08:55.313375   71637 api_server.go:279] https://192.168.61.135:8443/healthz returned 200:
	ok
	I0812 12:08:55.314615   71637 api_server.go:141] control plane version: v1.30.3
	I0812 12:08:55.314637   71637 api_server.go:131] duration metric: took 7.806721ms to wait for apiserver health ...
	I0812 12:08:55.314647   71637 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 12:08:55.485357   71637 system_pods.go:59] 7 kube-system pods found
	I0812 12:08:55.485392   71637 system_pods.go:61] "coredns-7db6d8ff4d-j6f9h" [b6255852-4883-4533-9b2f-531a3e2a1a08] Running
	I0812 12:08:55.485403   71637 system_pods.go:61] "etcd-flannel-824402" [cf17f2b4-52f3-46b9-9069-92a2fc7333f7] Running
	I0812 12:08:55.485409   71637 system_pods.go:61] "kube-apiserver-flannel-824402" [1170f96c-53ce-4b6d-8c09-ea8b36c8d98c] Running
	I0812 12:08:55.485414   71637 system_pods.go:61] "kube-controller-manager-flannel-824402" [56c4d33f-0ff3-4b2b-a7ba-b8bb16883c5e] Running
	I0812 12:08:55.485419   71637 system_pods.go:61] "kube-proxy-cslmr" [1be98100-774a-40e9-8bcc-6d56ed4fcd1c] Running
	I0812 12:08:55.485423   71637 system_pods.go:61] "kube-scheduler-flannel-824402" [1e646433-5273-47be-8271-fee895f1f445] Running
	I0812 12:08:55.485427   71637 system_pods.go:61] "storage-provisioner" [f8eae065-49f9-4a97-803c-2d3a897d2234] Running
	I0812 12:08:55.485438   71637 system_pods.go:74] duration metric: took 170.781071ms to wait for pod list to return data ...
	I0812 12:08:55.485447   71637 default_sa.go:34] waiting for default service account to be created ...
	I0812 12:08:55.681461   71637 default_sa.go:45] found service account: "default"
	I0812 12:08:55.681488   71637 default_sa.go:55] duration metric: took 196.034038ms for default service account to be created ...
	I0812 12:08:55.681496   71637 system_pods.go:116] waiting for k8s-apps to be running ...
	I0812 12:08:55.884781   71637 system_pods.go:86] 7 kube-system pods found
	I0812 12:08:55.884809   71637 system_pods.go:89] "coredns-7db6d8ff4d-j6f9h" [b6255852-4883-4533-9b2f-531a3e2a1a08] Running
	I0812 12:08:55.884814   71637 system_pods.go:89] "etcd-flannel-824402" [cf17f2b4-52f3-46b9-9069-92a2fc7333f7] Running
	I0812 12:08:55.884819   71637 system_pods.go:89] "kube-apiserver-flannel-824402" [1170f96c-53ce-4b6d-8c09-ea8b36c8d98c] Running
	I0812 12:08:55.884824   71637 system_pods.go:89] "kube-controller-manager-flannel-824402" [56c4d33f-0ff3-4b2b-a7ba-b8bb16883c5e] Running
	I0812 12:08:55.884827   71637 system_pods.go:89] "kube-proxy-cslmr" [1be98100-774a-40e9-8bcc-6d56ed4fcd1c] Running
	I0812 12:08:55.884831   71637 system_pods.go:89] "kube-scheduler-flannel-824402" [1e646433-5273-47be-8271-fee895f1f445] Running
	I0812 12:08:55.884835   71637 system_pods.go:89] "storage-provisioner" [f8eae065-49f9-4a97-803c-2d3a897d2234] Running
	I0812 12:08:55.884841   71637 system_pods.go:126] duration metric: took 203.340216ms to wait for k8s-apps to be running ...
	I0812 12:08:55.884847   71637 system_svc.go:44] waiting for kubelet service to be running ....
	I0812 12:08:55.884921   71637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:08:55.906708   71637 system_svc.go:56] duration metric: took 21.851328ms WaitForService to wait for kubelet
	I0812 12:08:55.906743   71637 kubeadm.go:582] duration metric: took 26.213757397s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 12:08:55.906765   71637 node_conditions.go:102] verifying NodePressure condition ...
	I0812 12:08:56.081989   71637 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 12:08:56.082025   71637 node_conditions.go:123] node cpu capacity is 2
	I0812 12:08:56.082043   71637 node_conditions.go:105] duration metric: took 175.270908ms to run NodePressure ...
	I0812 12:08:56.082058   71637 start.go:241] waiting for startup goroutines ...
	I0812 12:08:56.082069   71637 start.go:246] waiting for cluster config update ...
	I0812 12:08:56.082084   71637 start.go:255] writing updated cluster config ...
	I0812 12:08:56.082426   71637 ssh_runner.go:195] Run: rm -f paused
	I0812 12:08:56.149925   71637 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0812 12:08:56.151930   71637 out.go:177] * Done! kubectl is now configured to use "flannel-824402" cluster and "default" namespace by default
	I0812 12:08:56.145705   74546 crio.go:462] duration metric: took 1.330238844s to copy over tarball
	I0812 12:08:56.145788   74546 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0812 12:08:58.543209   74546 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.397390422s)
	I0812 12:08:58.543241   74546 crio.go:469] duration metric: took 2.397511526s to extract the tarball
	I0812 12:08:58.543266   74546 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0812 12:08:58.585169   74546 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 12:08:58.627183   74546 crio.go:514] all images are preloaded for cri-o runtime.
	I0812 12:08:58.627207   74546 cache_images.go:84] Images are preloaded, skipping loading
	I0812 12:08:58.627214   74546 kubeadm.go:934] updating node { 192.168.39.247 8443 v1.30.3 crio true true} ...
	I0812 12:08:58.627317   74546 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-824402 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.247
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:bridge-824402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0812 12:08:58.627378   74546 ssh_runner.go:195] Run: crio config
	I0812 12:08:58.672034   74546 cni.go:84] Creating CNI manager for "bridge"
	I0812 12:08:58.672056   74546 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0812 12:08:58.672074   74546 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.247 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-824402 NodeName:bridge-824402 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.247"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.247 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0812 12:08:58.672192   74546 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.247
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-824402"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.247
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.247"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0812 12:08:58.672260   74546 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0812 12:08:58.681957   74546 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 12:08:58.682030   74546 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0812 12:08:58.691407   74546 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0812 12:08:58.707686   74546 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 12:08:58.724604   74546 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0812 12:08:58.741380   74546 ssh_runner.go:195] Run: grep 192.168.39.247	control-plane.minikube.internal$ /etc/hosts
	I0812 12:08:58.745056   74546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.247	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 12:08:58.756818   74546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:08:58.882922   74546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 12:08:58.901669   74546 certs.go:68] Setting up /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/bridge-824402 for IP: 192.168.39.247
	I0812 12:08:58.901698   74546 certs.go:194] generating shared ca certs ...
	I0812 12:08:58.901712   74546 certs.go:226] acquiring lock for ca certs: {Name:mkab0b83a49d5695bc804bf960b468b7a47f73f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:08:58.901878   74546 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key
	I0812 12:08:58.901930   74546 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key
	I0812 12:08:58.901944   74546 certs.go:256] generating profile certs ...
	I0812 12:08:58.902005   74546 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/bridge-824402/client.key
	I0812 12:08:58.902022   74546 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/bridge-824402/client.crt with IP's: []
	I0812 12:08:59.036644   74546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/bridge-824402/client.crt ...
	I0812 12:08:59.036676   74546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/bridge-824402/client.crt: {Name:mk3e68a8ad4053f38920451748882a1b108245d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:08:59.036898   74546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/bridge-824402/client.key ...
	I0812 12:08:59.036918   74546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/bridge-824402/client.key: {Name:mkefc4a97133c408b7cc387ef35279b33526a511 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:08:59.037033   74546 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/bridge-824402/apiserver.key.5c514c06
	I0812 12:08:59.037059   74546 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/bridge-824402/apiserver.crt.5c514c06 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.247]
	I0812 12:08:59.496340   74546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/bridge-824402/apiserver.crt.5c514c06 ...
	I0812 12:08:59.496374   74546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/bridge-824402/apiserver.crt.5c514c06: {Name:mk6b1da0777de165e662f0f8b28c23d82d32fb4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:08:59.496581   74546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/bridge-824402/apiserver.key.5c514c06 ...
	I0812 12:08:59.496602   74546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/bridge-824402/apiserver.key.5c514c06: {Name:mke8ab3e9c941092c66c1657faac05e7282cb9c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:08:59.496690   74546 certs.go:381] copying /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/bridge-824402/apiserver.crt.5c514c06 -> /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/bridge-824402/apiserver.crt
	I0812 12:08:59.496776   74546 certs.go:385] copying /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/bridge-824402/apiserver.key.5c514c06 -> /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/bridge-824402/apiserver.key
	I0812 12:08:59.496829   74546 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/bridge-824402/proxy-client.key
	I0812 12:08:59.496843   74546 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/bridge-824402/proxy-client.crt with IP's: []
	I0812 12:08:59.783188   74546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/bridge-824402/proxy-client.crt ...
	I0812 12:08:59.783222   74546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/bridge-824402/proxy-client.crt: {Name:mkab9a4f496d98c3a9f910b6ef0b549b6a7c6141 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:08:59.783408   74546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/bridge-824402/proxy-client.key ...
	I0812 12:08:59.783425   74546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/bridge-824402/proxy-client.key: {Name:mk8586aa995ef902af23e4bee5ce88ebec1ba7d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:08:59.783656   74546 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem (1338 bytes)
	W0812 12:08:59.783697   74546 certs.go:480] ignoring /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927_empty.pem, impossibly tiny 0 bytes
	I0812 12:08:59.783703   74546 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca-key.pem (1679 bytes)
	I0812 12:08:59.783724   74546 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/ca.pem (1082 bytes)
	I0812 12:08:59.783750   74546 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/cert.pem (1123 bytes)
	I0812 12:08:59.783773   74546 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/certs/key.pem (1679 bytes)
	I0812 12:08:59.783808   74546 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem (1708 bytes)
	I0812 12:08:59.784491   74546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 12:08:59.830821   74546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 12:08:59.866881   74546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 12:08:59.891392   74546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 12:08:59.916944   74546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/bridge-824402/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0812 12:08:59.942744   74546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/bridge-824402/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0812 12:08:59.968838   74546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/bridge-824402/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 12:08:59.993328   74546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/bridge-824402/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0812 12:09:00.018239   74546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/ssl/certs/109272.pem --> /usr/share/ca-certificates/109272.pem (1708 bytes)
	I0812 12:09:00.045145   74546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 12:09:00.070646   74546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3774/.minikube/certs/10927.pem --> /usr/share/ca-certificates/10927.pem (1338 bytes)
	I0812 12:09:00.094463   74546 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 12:09:00.111237   74546 ssh_runner.go:195] Run: openssl version
	I0812 12:09:00.116992   74546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10927.pem && ln -fs /usr/share/ca-certificates/10927.pem /etc/ssl/certs/10927.pem"
	I0812 12:09:00.127723   74546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10927.pem
	I0812 12:09:00.132299   74546 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 10:33 /usr/share/ca-certificates/10927.pem
	I0812 12:09:00.132358   74546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10927.pem
	I0812 12:09:00.138442   74546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10927.pem /etc/ssl/certs/51391683.0"
	I0812 12:09:00.150843   74546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109272.pem && ln -fs /usr/share/ca-certificates/109272.pem /etc/ssl/certs/109272.pem"
	I0812 12:09:00.163882   74546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109272.pem
	I0812 12:09:00.168369   74546 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 10:33 /usr/share/ca-certificates/109272.pem
	I0812 12:09:00.168436   74546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109272.pem
	I0812 12:09:00.174218   74546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109272.pem /etc/ssl/certs/3ec20f2e.0"
	I0812 12:09:00.185796   74546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 12:09:00.197266   74546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:09:00.202061   74546 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:09:00.202121   74546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:09:00.207677   74546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
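Purely as an illustration (not from the log): the /etc/ssl/certs/<hash>.0 link names used above (51391683.0, 3ec20f2e.0, b5213941.0) come from the OpenSSL subject hash of each PEM, which is what the openssl x509 -hash -noout calls compute. A minimal sketch of the same pattern applied over a directory of certificates:

# Sketch: link each CA PEM under its subject-hash name, as the commands above do per file.
for pem in /usr/share/ca-certificates/*.pem; do
  h=$(openssl x509 -hash -noout -in "$pem")
  sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"
done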
	I0812 12:09:00.218617   74546 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 12:09:00.223069   74546 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0812 12:09:00.223122   74546 kubeadm.go:392] StartCluster: {Name:bridge-824402 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:bridge-824402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 12:09:00.223189   74546 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0812 12:09:00.223258   74546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 12:09:00.259336   74546 cri.go:89] found id: ""
	I0812 12:09:00.259417   74546 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0812 12:09:00.269436   74546 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 12:09:00.279713   74546 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 12:09:00.290176   74546 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 12:09:00.290195   74546 kubeadm.go:157] found existing configuration files:
	
	I0812 12:09:00.290234   74546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 12:09:00.299518   74546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 12:09:00.299574   74546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 12:09:00.309169   74546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 12:09:00.318327   74546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 12:09:00.318410   74546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 12:09:00.328974   74546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 12:09:00.339717   74546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 12:09:00.339777   74546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 12:09:00.352494   74546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 12:09:00.363252   74546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 12:09:00.363333   74546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 12:09:00.373697   74546 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 12:09:00.571235   74546 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	
	
	==> CRI-O <==
	Aug 12 12:09:10 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:09:10.002090528Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d1d85169-348c-425a-9e8f-dce16eb04dbd name=/runtime.v1.RuntimeService/Version
	Aug 12 12:09:10 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:09:10.003121304Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4bf9c6d6-86fb-4a33-8afc-67da081a1d02 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:09:10 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:09:10.004153184Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464550004121179,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4bf9c6d6-86fb-4a33-8afc-67da081a1d02 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:09:10 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:09:10.004830472Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=39638889-75ce-444b-b359-370255009c33 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:09:10 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:09:10.004907860Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=39638889-75ce-444b-b359-370255009c33 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:09:10 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:09:10.005237684Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8570fb2a8fc3fdbfe7cea08441468023cc8cee013e33a66bb26c807bfa1563dd,PodSandboxId:e6452c0888bf73fdeb682033a0ec7a4c5da745fb4d903d3e2416119d5a39d742,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723463590190840752,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4930c51e-a227-4742-b74a-669e9bea4e75,},Annotations:map[string]string{io.kubernetes.container.hash: acf9d8f0,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4,PodSandboxId:d99458c08ab379c3e3f66d398bbb2c370cd87ade4b9181c4d7b6d1c5e0f25b15,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463587302117092,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-86flr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 703201f6-ba92-45f7-b273-ee508cf51e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 96632d4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":
\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c,PodSandboxId:27a19bbbd58972fd4696c66e26d8f982707a3730dc4e7fcb651e17e4c68af1b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723463585227850623,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 93affc3b-a4e7-4c19-824c-3eec33616acc,},Annotations:map[string]string{io.kubernetes.container.hash: 60a22b4b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26,PodSandboxId:c2130f142c3ea6bfa2b183e340f8a8a5a2d67275ec8aeb812a88fc5fb23cea01,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723463583231795510,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6fzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0f6bcc8-26
3a-4b23-a60b-c67475a868bf,},Annotations:map[string]string{io.kubernetes.container.hash: 9f59257e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f,PodSandboxId:337327ca5a4bbfb704b1dd1eb4debcbc3a184aaf3faea59a8e0d98e868e3bf2b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723463570380065596,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 87e28fa37bca7211058973f16ff6cce0,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126,PodSandboxId:ae96fed1fe4ba01bdf70ed821b3613e7827855ba051ab64629af25dc31a425bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723463563311742133,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e4b581148cc79
b5d3e65b07cdee767f,},Annotations:map[string]string{io.kubernetes.container.hash: 4bf4fd88,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98,PodSandboxId:32667d56a4852941322a4b385a46d64306e70dc84439d329186b824bae374608,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723463552309598415,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 941ee3e5ebd2b0c2
d10426764fced5cd,},Annotations:map[string]string{io.kubernetes.container.hash: d5de8ef9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804,PodSandboxId:ea5be7e9df4058dc6ba9d858451a0f9020e35db6b685af4cadc11029c67de56f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723463531011469465,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f17375bf38b45aef0
44822c815b92ea,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f,PodSandboxId:337327ca5a4bbfb704b1dd1eb4debcbc3a184aaf3faea59a8e0d98e868e3bf2b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723463531016562967,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 87e28fa37bca7211058973f16ff6cce0,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1,PodSandboxId:32667d56a4852941322a4b385a46d64306e70dc84439d329186b824bae374608,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723463530981775319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 941ee3e5ebd2b0c2d10426764fced5cd,},Annotations:map[string]string{io.kubernetes.container.hash: d5de8ef9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=39638889-75ce-444b-b359-370255009c33 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:09:10 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:09:10.014865073Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=98190f35-8524-4ace-8777-76281f54285d name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 12 12:09:10 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:09:10.015583790Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d99458c08ab379c3e3f66d398bbb2c370cd87ade4b9181c4d7b6d1c5e0f25b15,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-86flr,Uid:703201f6-ba92-45f7-b273-ee508cf51e2b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723463587022780606,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-86flr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 703201f6-ba92-45f7-b273-ee508cf51e2b,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-12T11:52:51.161753453Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e6452c0888bf73fdeb682033a0ec7a4c5da745fb4d903d3e2416119d5a39d742,Metadata:&PodSandboxMetadata{Name:busybox,Uid:4930c51e-a227-4742-b74a-669e9bea4e75,Namespace:default,Attem
pt:0,},State:SANDBOX_READY,CreatedAt:1723463587022224334,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4930c51e-a227-4742-b74a-669e9bea4e75,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-12T11:52:51.161759896Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:840e4165628b9752ce8dacaabe83891617956c556ef0dd34e0660e4768d638ab,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-wcpgl,Uid:11f6c813-ebc1-4712-b758-cb08ff921d77,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723463579221079676,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-wcpgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11f6c813-ebc1-4712-b758-cb08ff921d77,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-12
T11:52:51.161757342Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:27a19bbbd58972fd4696c66e26d8f982707a3730dc4e7fcb651e17e4c68af1b7,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:93affc3b-a4e7-4c19-824c-3eec33616acc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723463571481404914,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93affc3b-a4e7-4c19-824c-3eec33616acc,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"g
cr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-12T11:52:51.161758521Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c2130f142c3ea6bfa2b183e340f8a8a5a2d67275ec8aeb812a88fc5fb23cea01,Metadata:&PodSandboxMetadata{Name:kube-proxy-h6fzz,Uid:b0f6bcc8-263a-4b23-a60b-c67475a868bf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723463571479640799,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-h6fzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0f6bcc8-263a-4b23-a60b-c67475a868bf,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{ku
bernetes.io/config.seen: 2024-08-12T11:52:51.161761983Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ae96fed1fe4ba01bdf70ed821b3613e7827855ba051ab64629af25dc31a425bd,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-581883,Uid:9e4b581148cc79b5d3e65b07cdee767f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723463563228390051,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e4b581148cc79b5d3e65b07cdee767f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.114:2379,kubernetes.io/config.hash: 9e4b581148cc79b5d3e65b07cdee767f,kubernetes.io/config.seen: 2024-08-12T11:52:29.118415304Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:337327ca5a4bbfb704b1dd1eb4debcbc3a184aaf3faea59a8e0d98e868e3bf2b,Metadata:&PodSandboxMetadata{Name:
kube-controller-manager-default-k8s-diff-port-581883,Uid:87e28fa37bca7211058973f16ff6cce0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723463529640654166,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e28fa37bca7211058973f16ff6cce0,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 87e28fa37bca7211058973f16ff6cce0,kubernetes.io/config.seen: 2024-08-12T11:52:09.126843955Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ea5be7e9df4058dc6ba9d858451a0f9020e35db6b685af4cadc11029c67de56f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-581883,Uid:3f17375bf38b45aef044822c815b92ea,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723463529632949765,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name
: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f17375bf38b45aef044822c815b92ea,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3f17375bf38b45aef044822c815b92ea,kubernetes.io/config.seen: 2024-08-12T11:52:09.126835366Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:32667d56a4852941322a4b385a46d64306e70dc84439d329186b824bae374608,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-581883,Uid:941ee3e5ebd2b0c2d10426764fced5cd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723463529628080173,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 941ee3e5ebd2b0c2d10426764fced5cd,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-a
ddress.endpoint: 192.168.50.114:8444,kubernetes.io/config.hash: 941ee3e5ebd2b0c2d10426764fced5cd,kubernetes.io/config.seen: 2024-08-12T11:52:09.126842044Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=98190f35-8524-4ace-8777-76281f54285d name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 12 12:09:10 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:09:10.016920826Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=64328e8c-3474-4519-a7f2-67e121ad3fa2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:09:10 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:09:10.017001097Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=64328e8c-3474-4519-a7f2-67e121ad3fa2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:09:10 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:09:10.017389303Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8570fb2a8fc3fdbfe7cea08441468023cc8cee013e33a66bb26c807bfa1563dd,PodSandboxId:e6452c0888bf73fdeb682033a0ec7a4c5da745fb4d903d3e2416119d5a39d742,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723463590190840752,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4930c51e-a227-4742-b74a-669e9bea4e75,},Annotations:map[string]string{io.kubernetes.container.hash: acf9d8f0,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4,PodSandboxId:d99458c08ab379c3e3f66d398bbb2c370cd87ade4b9181c4d7b6d1c5e0f25b15,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463587302117092,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-86flr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 703201f6-ba92-45f7-b273-ee508cf51e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 96632d4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":
\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c,PodSandboxId:27a19bbbd58972fd4696c66e26d8f982707a3730dc4e7fcb651e17e4c68af1b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723463585227850623,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 93affc3b-a4e7-4c19-824c-3eec33616acc,},Annotations:map[string]string{io.kubernetes.container.hash: 60a22b4b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26,PodSandboxId:c2130f142c3ea6bfa2b183e340f8a8a5a2d67275ec8aeb812a88fc5fb23cea01,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723463583231795510,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6fzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0f6bcc8-26
3a-4b23-a60b-c67475a868bf,},Annotations:map[string]string{io.kubernetes.container.hash: 9f59257e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f,PodSandboxId:337327ca5a4bbfb704b1dd1eb4debcbc3a184aaf3faea59a8e0d98e868e3bf2b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723463570380065596,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 87e28fa37bca7211058973f16ff6cce0,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126,PodSandboxId:ae96fed1fe4ba01bdf70ed821b3613e7827855ba051ab64629af25dc31a425bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723463563311742133,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e4b581148cc79
b5d3e65b07cdee767f,},Annotations:map[string]string{io.kubernetes.container.hash: 4bf4fd88,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98,PodSandboxId:32667d56a4852941322a4b385a46d64306e70dc84439d329186b824bae374608,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723463552309598415,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 941ee3e5ebd2b0c2
d10426764fced5cd,},Annotations:map[string]string{io.kubernetes.container.hash: d5de8ef9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804,PodSandboxId:ea5be7e9df4058dc6ba9d858451a0f9020e35db6b685af4cadc11029c67de56f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723463531011469465,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f17375bf38b45aef0
44822c815b92ea,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f,PodSandboxId:337327ca5a4bbfb704b1dd1eb4debcbc3a184aaf3faea59a8e0d98e868e3bf2b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723463531016562967,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 87e28fa37bca7211058973f16ff6cce0,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1,PodSandboxId:32667d56a4852941322a4b385a46d64306e70dc84439d329186b824bae374608,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723463530981775319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 941ee3e5ebd2b0c2d10426764fced5cd,},Annotations:map[string]string{io.kubernetes.container.hash: d5de8ef9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=64328e8c-3474-4519-a7f2-67e121ad3fa2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:09:10 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:09:10.064820068Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6dd9b467-e16e-4d04-bd1b-8274890cf976 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:09:10 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:09:10.064904197Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6dd9b467-e16e-4d04-bd1b-8274890cf976 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:09:10 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:09:10.065898399Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0c09536c-0ffd-4c7d-aee4-fdcf2b553ad4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:09:10 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:09:10.066287930Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464550066265015,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0c09536c-0ffd-4c7d-aee4-fdcf2b553ad4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:09:10 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:09:10.066923664Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=680157cb-3127-4f8f-941a-67ba489f661c name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:09:10 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:09:10.066997859Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=680157cb-3127-4f8f-941a-67ba489f661c name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:09:10 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:09:10.067363906Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8570fb2a8fc3fdbfe7cea08441468023cc8cee013e33a66bb26c807bfa1563dd,PodSandboxId:e6452c0888bf73fdeb682033a0ec7a4c5da745fb4d903d3e2416119d5a39d742,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723463590190840752,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4930c51e-a227-4742-b74a-669e9bea4e75,},Annotations:map[string]string{io.kubernetes.container.hash: acf9d8f0,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4,PodSandboxId:d99458c08ab379c3e3f66d398bbb2c370cd87ade4b9181c4d7b6d1c5e0f25b15,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463587302117092,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-86flr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 703201f6-ba92-45f7-b273-ee508cf51e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 96632d4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":
\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c,PodSandboxId:27a19bbbd58972fd4696c66e26d8f982707a3730dc4e7fcb651e17e4c68af1b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723463585227850623,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 93affc3b-a4e7-4c19-824c-3eec33616acc,},Annotations:map[string]string{io.kubernetes.container.hash: 60a22b4b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26,PodSandboxId:c2130f142c3ea6bfa2b183e340f8a8a5a2d67275ec8aeb812a88fc5fb23cea01,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723463583231795510,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6fzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0f6bcc8-26
3a-4b23-a60b-c67475a868bf,},Annotations:map[string]string{io.kubernetes.container.hash: 9f59257e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f,PodSandboxId:337327ca5a4bbfb704b1dd1eb4debcbc3a184aaf3faea59a8e0d98e868e3bf2b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723463570380065596,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 87e28fa37bca7211058973f16ff6cce0,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126,PodSandboxId:ae96fed1fe4ba01bdf70ed821b3613e7827855ba051ab64629af25dc31a425bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723463563311742133,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e4b581148cc79
b5d3e65b07cdee767f,},Annotations:map[string]string{io.kubernetes.container.hash: 4bf4fd88,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98,PodSandboxId:32667d56a4852941322a4b385a46d64306e70dc84439d329186b824bae374608,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723463552309598415,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 941ee3e5ebd2b0c2
d10426764fced5cd,},Annotations:map[string]string{io.kubernetes.container.hash: d5de8ef9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804,PodSandboxId:ea5be7e9df4058dc6ba9d858451a0f9020e35db6b685af4cadc11029c67de56f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723463531011469465,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f17375bf38b45aef0
44822c815b92ea,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f,PodSandboxId:337327ca5a4bbfb704b1dd1eb4debcbc3a184aaf3faea59a8e0d98e868e3bf2b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723463531016562967,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 87e28fa37bca7211058973f16ff6cce0,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1,PodSandboxId:32667d56a4852941322a4b385a46d64306e70dc84439d329186b824bae374608,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723463530981775319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 941ee3e5ebd2b0c2d10426764fced5cd,},Annotations:map[string]string{io.kubernetes.container.hash: d5de8ef9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=680157cb-3127-4f8f-941a-67ba489f661c name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:09:10 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:09:10.111114196Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e7fd1658-8aad-4524-8495-b3c01f55e871 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:09:10 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:09:10.111201517Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e7fd1658-8aad-4524-8495-b3c01f55e871 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:09:10 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:09:10.112815402Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fbe12264-246d-45d2-9e9f-f75daa692cdb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:09:10 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:09:10.113251623Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723464550113225865,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fbe12264-246d-45d2-9e9f-f75daa692cdb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:09:10 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:09:10.113920618Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5fda52ae-df21-4253-aae6-beb962c127ef name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:09:10 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:09:10.113981538Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5fda52ae-df21-4253-aae6-beb962c127ef name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:09:10 default-k8s-diff-port-581883 crio[736]: time="2024-08-12 12:09:10.114253211Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8570fb2a8fc3fdbfe7cea08441468023cc8cee013e33a66bb26c807bfa1563dd,PodSandboxId:e6452c0888bf73fdeb682033a0ec7a4c5da745fb4d903d3e2416119d5a39d742,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723463590190840752,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4930c51e-a227-4742-b74a-669e9bea4e75,},Annotations:map[string]string{io.kubernetes.container.hash: acf9d8f0,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4,PodSandboxId:d99458c08ab379c3e3f66d398bbb2c370cd87ade4b9181c4d7b6d1c5e0f25b15,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723463587302117092,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-86flr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 703201f6-ba92-45f7-b273-ee508cf51e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 96632d4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":
\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c,PodSandboxId:27a19bbbd58972fd4696c66e26d8f982707a3730dc4e7fcb651e17e4c68af1b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723463585227850623,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 93affc3b-a4e7-4c19-824c-3eec33616acc,},Annotations:map[string]string{io.kubernetes.container.hash: 60a22b4b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26,PodSandboxId:c2130f142c3ea6bfa2b183e340f8a8a5a2d67275ec8aeb812a88fc5fb23cea01,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723463583231795510,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6fzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0f6bcc8-26
3a-4b23-a60b-c67475a868bf,},Annotations:map[string]string{io.kubernetes.container.hash: 9f59257e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f,PodSandboxId:337327ca5a4bbfb704b1dd1eb4debcbc3a184aaf3faea59a8e0d98e868e3bf2b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723463570380065596,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 87e28fa37bca7211058973f16ff6cce0,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126,PodSandboxId:ae96fed1fe4ba01bdf70ed821b3613e7827855ba051ab64629af25dc31a425bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723463563311742133,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e4b581148cc79
b5d3e65b07cdee767f,},Annotations:map[string]string{io.kubernetes.container.hash: 4bf4fd88,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98,PodSandboxId:32667d56a4852941322a4b385a46d64306e70dc84439d329186b824bae374608,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723463552309598415,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 941ee3e5ebd2b0c2
d10426764fced5cd,},Annotations:map[string]string{io.kubernetes.container.hash: d5de8ef9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804,PodSandboxId:ea5be7e9df4058dc6ba9d858451a0f9020e35db6b685af4cadc11029c67de56f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723463531011469465,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f17375bf38b45aef0
44822c815b92ea,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f,PodSandboxId:337327ca5a4bbfb704b1dd1eb4debcbc3a184aaf3faea59a8e0d98e868e3bf2b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723463531016562967,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 87e28fa37bca7211058973f16ff6cce0,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1,PodSandboxId:32667d56a4852941322a4b385a46d64306e70dc84439d329186b824bae374608,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723463530981775319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-581883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 941ee3e5ebd2b0c2d10426764fced5cd,},Annotations:map[string]string{io.kubernetes.container.hash: d5de8ef9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5fda52ae-df21-4253-aae6-beb962c127ef name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8570fb2a8fc3f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   16 minutes ago      Running             busybox                   1                   e6452c0888bf7       busybox
	72cbd6f9c7cd4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Running             coredns                   1                   d99458c08ab37       coredns-7db6d8ff4d-86flr
	3cd0b00766504       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      16 minutes ago      Running             storage-provisioner       1                   27a19bbbd5897       storage-provisioner
	b283882c75248       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      16 minutes ago      Running             kube-proxy                1                   c2130f142c3ea       kube-proxy-h6fzz
	b4740bb15a741       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      16 minutes ago      Running             kube-controller-manager   2                   337327ca5a4bb       kube-controller-manager-default-k8s-diff-port-581883
	a8c6a879fccb9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Running             etcd                      1                   ae96fed1fe4ba       etcd-default-k8s-diff-port-581883
	87bb668a8df7c       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      16 minutes ago      Running             kube-apiserver            2                   32667d56a4852       kube-apiserver-default-k8s-diff-port-581883
	f182f5e4cb38c       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      16 minutes ago      Exited              kube-controller-manager   1                   337327ca5a4bb       kube-controller-manager-default-k8s-diff-port-581883
	3fac62c7d9a1c       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      16 minutes ago      Running             kube-scheduler            1                   ea5be7e9df405       kube-scheduler-default-k8s-diff-port-581883
	399d65bf1849f       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      16 minutes ago      Exited              kube-apiserver            1                   32667d56a4852       kube-apiserver-default-k8s-diff-port-581883
	
	
	==> coredns [72cbd6f9c7cd4044e14b68d8737bb519c2644a737e3a90cbd1793735b6d0a8b4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41679 - 22938 "HINFO IN 3970124945216707387.6801956301757465445. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012453193s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-581883
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-581883
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
	                    minikube.k8s.io/name=default-k8s-diff-port-581883
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_12T11_43_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 11:43:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-581883
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 12:09:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 12:09:06 +0000   Mon, 12 Aug 2024 11:43:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 12:09:06 +0000   Mon, 12 Aug 2024 11:43:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 12:09:06 +0000   Mon, 12 Aug 2024 11:43:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 12:09:06 +0000   Mon, 12 Aug 2024 11:53:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.114
	  Hostname:    default-k8s-diff-port-581883
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4246217d28ad450d8bacd3ae2138cfc0
	  System UUID:                4246217d-28ad-450d-8bac-d3ae2138cfc0
	  Boot ID:                    4bc71395-9c86-4364-b112-0ee5bb52e581
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 coredns-7db6d8ff4d-86flr                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     25m
	  kube-system                 etcd-default-k8s-diff-port-581883                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         25m
	  kube-system                 kube-apiserver-default-k8s-diff-port-581883              250m (12%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-581883     200m (10%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-proxy-h6fzz                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-scheduler-default-k8s-diff-port-581883              100m (5%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 metrics-server-569cc877fc-wcpgl                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         24m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25m                kube-proxy       
	  Normal  Starting                 16m                kube-proxy       
	  Normal  NodeHasSufficientPID     25m                kubelet          Node default-k8s-diff-port-581883 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  25m                kubelet          Node default-k8s-diff-port-581883 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25m                kubelet          Node default-k8s-diff-port-581883 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 25m                kubelet          Starting kubelet.
	  Normal  NodeReady                25m                kubelet          Node default-k8s-diff-port-581883 status is now: NodeReady
	  Normal  RegisteredNode           25m                node-controller  Node default-k8s-diff-port-581883 event: Registered Node default-k8s-diff-port-581883 in Controller
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node default-k8s-diff-port-581883 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node default-k8s-diff-port-581883 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node default-k8s-diff-port-581883 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16m                node-controller  Node default-k8s-diff-port-581883 event: Registered Node default-k8s-diff-port-581883 in Controller
	
	
	==> dmesg <==
	[Aug12 11:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053744] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039717] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.779994] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.890829] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.613443] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug12 11:52] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.057041] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064201] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.197045] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.125506] systemd-fstab-generator[690]: Ignoring "noauto" option for root device
	[  +0.310748] systemd-fstab-generator[721]: Ignoring "noauto" option for root device
	[  +4.202306] systemd-fstab-generator[817]: Ignoring "noauto" option for root device
	[  +1.799417] systemd-fstab-generator[940]: Ignoring "noauto" option for root device
	[  +0.072858] kauditd_printk_skb: 158 callbacks suppressed
	[ +13.688538] kauditd_printk_skb: 59 callbacks suppressed
	[ +34.771635] systemd-fstab-generator[1513]: Ignoring "noauto" option for root device
	[  +0.115144] kauditd_printk_skb: 5 callbacks suppressed
	[Aug12 11:53] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.277082] kauditd_printk_skb: 59 callbacks suppressed
	
	
	==> etcd [a8c6a879fccb9f4216af2a531967c99c7db5156d770a53cf5a660c691c1ad126] <==
	{"level":"warn","ts":"2024-08-12T12:05:03.47584Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.667947ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-12T12:05:03.475894Z","caller":"traceutil/trace.go:171","msg":"trace[1653121073] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1196; }","duration":"232.74916ms","start":"2024-08-12T12:05:03.243135Z","end":"2024-08-12T12:05:03.475884Z","steps":["trace[1653121073] 'agreement among raft nodes before linearized reading'  (duration: 232.677185ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T12:05:03.476047Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"340.533894ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers/\" range_end:\"/registry/controllers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-12T12:05:03.476093Z","caller":"traceutil/trace.go:171","msg":"trace[235945458] range","detail":"{range_begin:/registry/controllers/; range_end:/registry/controllers0; response_count:0; response_revision:1196; }","duration":"340.608648ms","start":"2024-08-12T12:05:03.135475Z","end":"2024-08-12T12:05:03.476084Z","steps":["trace[235945458] 'agreement among raft nodes before linearized reading'  (duration: 340.548499ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T12:05:03.476146Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-12T12:05:03.135458Z","time spent":"340.677628ms","remote":"127.0.0.1:34698","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":0,"response size":28,"request content":"key:\"/registry/controllers/\" range_end:\"/registry/controllers0\" count_only:true "}
	{"level":"info","ts":"2024-08-12T12:05:03.475254Z","caller":"traceutil/trace.go:171","msg":"trace[1730288780] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; response_count:0; response_revision:1196; }","duration":"450.907345ms","start":"2024-08-12T12:05:03.02434Z","end":"2024-08-12T12:05:03.475247Z","steps":["trace[1730288780] 'agreement among raft nodes before linearized reading'  (duration: 450.834158ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T12:05:03.476388Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-12T12:05:03.024278Z","time spent":"452.095022ms","remote":"127.0.0.1:34982","response type":"/etcdserverpb.KV/Range","request count":0,"request size":90,"response count":0,"response size":28,"request content":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true "}
	{"level":"warn","ts":"2024-08-12T12:05:25.604787Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.277517ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"}
	{"level":"info","ts":"2024-08-12T12:05:25.604923Z","caller":"traceutil/trace.go:171","msg":"trace[1421475193] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:1214; }","duration":"129.427993ms","start":"2024-08-12T12:05:25.475483Z","end":"2024-08-12T12:05:25.604911Z","steps":["trace[1421475193] 'range keys from in-memory index tree'  (duration: 129.165878ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T12:05:25.749207Z","caller":"traceutil/trace.go:171","msg":"trace[1515799042] transaction","detail":"{read_only:false; response_revision:1215; number_of_response:1; }","duration":"139.784632ms","start":"2024-08-12T12:05:25.609403Z","end":"2024-08-12T12:05:25.749187Z","steps":["trace[1515799042] 'process raft request'  (duration: 139.639053ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T12:05:50.851523Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"349.569303ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3883669985372455496 > lease_revoke:<id:35e591466ef8c1f7>","response":"size:28"}
	{"level":"info","ts":"2024-08-12T12:06:32.593484Z","caller":"traceutil/trace.go:171","msg":"trace[80340591] transaction","detail":"{read_only:false; response_revision:1267; number_of_response:1; }","duration":"187.353428ms","start":"2024-08-12T12:06:32.406088Z","end":"2024-08-12T12:06:32.593442Z","steps":["trace[80340591] 'process raft request'  (duration: 187.163958ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T12:07:13.057979Z","caller":"traceutil/trace.go:171","msg":"trace[328870341] transaction","detail":"{read_only:false; response_revision:1299; number_of_response:1; }","duration":"246.002546ms","start":"2024-08-12T12:07:12.811962Z","end":"2024-08-12T12:07:13.057965Z","steps":["trace[328870341] 'process raft request'  (duration: 245.892421ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T12:07:37.407112Z","caller":"traceutil/trace.go:171","msg":"trace[403359245] transaction","detail":"{read_only:false; response_revision:1318; number_of_response:1; }","duration":"221.796554ms","start":"2024-08-12T12:07:37.185282Z","end":"2024-08-12T12:07:37.407078Z","steps":["trace[403359245] 'process raft request'  (duration: 221.68578ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T12:07:37.407562Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.108458ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-12T12:07:37.407617Z","caller":"traceutil/trace.go:171","msg":"trace[668591283] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1318; }","duration":"191.320846ms","start":"2024-08-12T12:07:37.216284Z","end":"2024-08-12T12:07:37.407605Z","steps":["trace[668591283] 'agreement among raft nodes before linearized reading'  (duration: 191.081697ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T12:07:37.407286Z","caller":"traceutil/trace.go:171","msg":"trace[1426419433] linearizableReadLoop","detail":"{readStateIndex:1531; appliedIndex:1531; }","duration":"190.893686ms","start":"2024-08-12T12:07:37.216376Z","end":"2024-08-12T12:07:37.40727Z","steps":["trace[1426419433] 'read index received'  (duration: 190.886807ms)","trace[1426419433] 'applied index is now lower than readState.Index'  (duration: 5.636µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-12T12:07:37.411707Z","caller":"traceutil/trace.go:171","msg":"trace[575012394] transaction","detail":"{read_only:false; response_revision:1319; number_of_response:1; }","duration":"155.745781ms","start":"2024-08-12T12:07:37.255901Z","end":"2024-08-12T12:07:37.411647Z","steps":["trace[575012394] 'process raft request'  (duration: 155.662517ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T12:07:40.698468Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.442724ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3883669985372456030 > lease_revoke:<id:35e591466ef8c412>","response":"size:28"}
	{"level":"info","ts":"2024-08-12T12:07:48.877633Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1086}
	{"level":"info","ts":"2024-08-12T12:07:48.881592Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1086,"took":"3.444863ms","hash":3554669601,"current-db-size-bytes":2732032,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1622016,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-12T12:07:48.881679Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3554669601,"revision":1086,"compact-revision":844}
	{"level":"warn","ts":"2024-08-12T12:09:00.97366Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"253.694447ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3883669985372456423 > lease_revoke:<id:35e591466ef8c59b>","response":"size:28"}
	{"level":"info","ts":"2024-08-12T12:09:01.25517Z","caller":"traceutil/trace.go:171","msg":"trace[609218826] transaction","detail":"{read_only:false; response_revision:1387; number_of_response:1; }","duration":"106.11471ms","start":"2024-08-12T12:09:01.149022Z","end":"2024-08-12T12:09:01.255137Z","steps":["trace[609218826] 'process raft request'  (duration: 105.799206ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T12:09:05.560178Z","caller":"traceutil/trace.go:171","msg":"trace[422213739] transaction","detail":"{read_only:false; response_revision:1390; number_of_response:1; }","duration":"129.967776ms","start":"2024-08-12T12:09:05.430188Z","end":"2024-08-12T12:09:05.560156Z","steps":["trace[422213739] 'process raft request'  (duration: 63.350051ms)","trace[422213739] 'compare'  (duration: 66.50248ms)"],"step_count":2}
	
	
	==> kernel <==
	 12:09:10 up 17 min,  0 users,  load average: 0.57, 0.39, 0.23
	Linux default-k8s-diff-port-581883 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [399d65bf1849fd5a00697ed93dd76cb98c64593f7f30a95b738fa8421a290da1] <==
	I0812 11:52:11.315151       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0812 11:52:11.819648       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:52:11.822423       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0812 11:52:11.822520       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0812 11:52:11.824900       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0812 11:52:11.828386       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0812 11:52:11.828481       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0812 11:52:11.828675       1 instance.go:299] Using reconciler: lease
	W0812 11:52:11.829468       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:52:12.823054       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:52:12.823106       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:52:12.829732       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:52:14.185979       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:52:14.217583       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:52:14.400831       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:52:16.571225       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:52:16.790434       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:52:16.902027       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:52:20.374345       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:52:20.586247       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:52:20.617152       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:52:26.158154       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:52:27.533948       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0812 11:52:27.871114       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0812 11:52:31.829393       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [87bb668a8df7c81212dc6c3b58ec0e2b86b1907ec0a993f94860d21417454b98] <==
	Trace[1787800846]: [587.831794ms] [587.831794ms] END
	W0812 12:05:51.252580       1 handler_proxy.go:93] no RequestInfo found in the context
	E0812 12:05:51.252663       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0812 12:05:51.252673       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0812 12:05:51.253684       1 handler_proxy.go:93] no RequestInfo found in the context
	E0812 12:05:51.253766       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0812 12:05:51.253774       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0812 12:07:50.257877       1 handler_proxy.go:93] no RequestInfo found in the context
	E0812 12:07:50.257996       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0812 12:07:51.258172       1 handler_proxy.go:93] no RequestInfo found in the context
	W0812 12:07:51.258427       1 handler_proxy.go:93] no RequestInfo found in the context
	E0812 12:07:51.258481       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	E0812 12:07:51.258567       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0812 12:07:51.258568       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0812 12:07:51.259743       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0812 12:08:51.258949       1 handler_proxy.go:93] no RequestInfo found in the context
	E0812 12:08:51.259185       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0812 12:08:51.259222       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0812 12:08:51.260368       1 handler_proxy.go:93] no RequestInfo found in the context
	E0812 12:08:51.260462       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0812 12:08:51.260489       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [b4740bb15a741a867707024cc83d9bd143f8e9b503812d821da06e6587a5959f] <==
	I0812 12:03:56.228051       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="273.867µs"
	E0812 12:04:06.707153       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 12:04:07.195915       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0812 12:04:07.229425       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="50.146µs"
	E0812 12:04:36.711952       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 12:04:37.204245       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:05:06.718493       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 12:05:07.218050       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:05:36.723144       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 12:05:37.226159       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:06:06.727945       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 12:06:07.233082       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:06:36.733725       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 12:06:37.242878       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:07:06.739125       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 12:07:07.253180       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:07:36.744858       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 12:07:37.263288       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:08:06.751254       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 12:08:07.271219       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:08:36.756616       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 12:08:37.279728       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0812 12:09:06.762717       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0812 12:09:07.294839       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0812 12:09:08.226450       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="1.456171ms"
	
	
	==> kube-controller-manager [f182f5e4cb38c844f239669378967ea92c4cd07ab9f2a8bcf3fa159140d0dd1f] <==
	I0812 11:52:11.685115       1 serving.go:380] Generated self-signed cert in-memory
	I0812 11:52:12.154484       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0812 11:52:12.154521       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 11:52:12.156099       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0812 11:52:12.156203       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0812 11:52:12.156744       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0812 11:52:12.156814       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0812 11:52:50.172622       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-proxy [b283882c75248212de2ce4798d966bec4aad8cdf2b055b10e87624cf4dd49e26] <==
	I0812 11:53:03.437451       1 server_linux.go:69] "Using iptables proxy"
	I0812 11:53:03.457354       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.114"]
	I0812 11:53:03.494924       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0812 11:53:03.494971       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0812 11:53:03.494987       1 server_linux.go:165] "Using iptables Proxier"
	I0812 11:53:03.497896       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0812 11:53:03.498188       1 server.go:872] "Version info" version="v1.30.3"
	I0812 11:53:03.498607       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 11:53:03.502384       1 config.go:192] "Starting service config controller"
	I0812 11:53:03.503087       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0812 11:53:03.503665       1 config.go:101] "Starting endpoint slice config controller"
	I0812 11:53:03.503768       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0812 11:53:03.504417       1 config.go:319] "Starting node config controller"
	I0812 11:53:03.505567       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0812 11:53:03.603932       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0812 11:53:03.604089       1 shared_informer.go:320] Caches are synced for service config
	I0812 11:53:03.605977       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3fac62c7d9a1c7df93b68e7814a671f955b5ec186d2a3bdeb61f453542cb6804] <==
	W0812 11:52:50.228555       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0812 11:52:50.228670       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0812 11:52:50.228866       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0812 11:52:50.228950       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0812 11:52:50.229139       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0812 11:52:50.229176       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0812 11:52:50.229380       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0812 11:52:50.229458       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0812 11:52:50.229637       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0812 11:52:50.229730       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0812 11:52:50.230000       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0812 11:52:50.231377       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0812 11:52:50.231655       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0812 11:52:50.231751       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0812 11:52:50.231978       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0812 11:52:50.232080       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0812 11:52:50.232185       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0812 11:52:50.232265       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0812 11:52:50.232385       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0812 11:52:50.232463       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0812 11:52:50.233761       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0812 11:52:50.233846       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0812 11:52:50.234078       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0812 11:52:50.236383       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0812 11:52:51.439422       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 12 12:07:09 default-k8s-diff-port-581883 kubelet[947]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 12:07:09 default-k8s-diff-port-581883 kubelet[947]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 12:07:21 default-k8s-diff-port-581883 kubelet[947]: E0812 12:07:21.212821     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wcpgl" podUID="11f6c813-ebc1-4712-b758-cb08ff921d77"
	Aug 12 12:07:32 default-k8s-diff-port-581883 kubelet[947]: E0812 12:07:32.211993     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wcpgl" podUID="11f6c813-ebc1-4712-b758-cb08ff921d77"
	Aug 12 12:07:44 default-k8s-diff-port-581883 kubelet[947]: E0812 12:07:44.211984     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wcpgl" podUID="11f6c813-ebc1-4712-b758-cb08ff921d77"
	Aug 12 12:07:56 default-k8s-diff-port-581883 kubelet[947]: E0812 12:07:56.212544     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wcpgl" podUID="11f6c813-ebc1-4712-b758-cb08ff921d77"
	Aug 12 12:08:07 default-k8s-diff-port-581883 kubelet[947]: E0812 12:08:07.212968     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wcpgl" podUID="11f6c813-ebc1-4712-b758-cb08ff921d77"
	Aug 12 12:08:09 default-k8s-diff-port-581883 kubelet[947]: E0812 12:08:09.229577     947 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 12:08:09 default-k8s-diff-port-581883 kubelet[947]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 12:08:09 default-k8s-diff-port-581883 kubelet[947]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 12:08:09 default-k8s-diff-port-581883 kubelet[947]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 12:08:09 default-k8s-diff-port-581883 kubelet[947]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 12:08:19 default-k8s-diff-port-581883 kubelet[947]: E0812 12:08:19.213043     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wcpgl" podUID="11f6c813-ebc1-4712-b758-cb08ff921d77"
	Aug 12 12:08:32 default-k8s-diff-port-581883 kubelet[947]: E0812 12:08:32.214071     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wcpgl" podUID="11f6c813-ebc1-4712-b758-cb08ff921d77"
	Aug 12 12:08:44 default-k8s-diff-port-581883 kubelet[947]: E0812 12:08:44.212470     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wcpgl" podUID="11f6c813-ebc1-4712-b758-cb08ff921d77"
	Aug 12 12:08:55 default-k8s-diff-port-581883 kubelet[947]: E0812 12:08:55.231531     947 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 12 12:08:55 default-k8s-diff-port-581883 kubelet[947]: E0812 12:08:55.231635     947 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 12 12:08:55 default-k8s-diff-port-581883 kubelet[947]: E0812 12:08:55.232494     947 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xb5vc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-wcpgl_kube-system(11f6c813-ebc1-4712-b758-cb08ff921d77): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Aug 12 12:08:55 default-k8s-diff-port-581883 kubelet[947]: E0812 12:08:55.232707     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-wcpgl" podUID="11f6c813-ebc1-4712-b758-cb08ff921d77"
	Aug 12 12:09:08 default-k8s-diff-port-581883 kubelet[947]: E0812 12:09:08.212587     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wcpgl" podUID="11f6c813-ebc1-4712-b758-cb08ff921d77"
	Aug 12 12:09:09 default-k8s-diff-port-581883 kubelet[947]: E0812 12:09:09.235891     947 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 12:09:09 default-k8s-diff-port-581883 kubelet[947]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 12:09:09 default-k8s-diff-port-581883 kubelet[947]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 12:09:09 default-k8s-diff-port-581883 kubelet[947]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 12:09:09 default-k8s-diff-port-581883 kubelet[947]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [3cd0b00766504d1d0927339f0cb9059c7e0632a10420c1ec4be8e5656038637c] <==
	I0812 11:53:05.339792       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0812 11:53:05.361408       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0812 11:53:05.361587       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0812 11:53:22.763758       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0812 11:53:22.764646       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-581883_44043e73-9db8-4432-9357-42746608f214!
	I0812 11:53:22.764330       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eb3f6e99-3d75-4ff9-a114-2b4261bc75e7", APIVersion:"v1", ResourceVersion:"608", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-581883_44043e73-9db8-4432-9357-42746608f214 became leader
	I0812 11:53:22.865253       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-581883_44043e73-9db8-4432-9357-42746608f214!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-581883 -n default-k8s-diff-port-581883
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-581883 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-wcpgl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-581883 describe pod metrics-server-569cc877fc-wcpgl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-581883 describe pod metrics-server-569cc877fc-wcpgl: exit status 1 (65.389039ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-wcpgl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-581883 describe pod metrics-server-569cc877fc-wcpgl: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (167.08s)

                                                
                                    

Test pass (257/326)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 26.74
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.3/json-events 16.13
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.06
18 TestDownloadOnly/v1.30.3/DeleteAll 0.13
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.12
21 TestDownloadOnly/v1.31.0-rc.0/json-events 12.28
22 TestDownloadOnly/v1.31.0-rc.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-rc.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-rc.0/DeleteAll 0.13
28 TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.57
31 TestOffline 100.09
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 142.65
40 TestAddons/serial/GCPAuth/Namespaces 0.16
42 TestAddons/parallel/Registry 16.93
44 TestAddons/parallel/InspektorGadget 12.14
46 TestAddons/parallel/HelmTiller 11.4
48 TestAddons/parallel/CSI 79.03
49 TestAddons/parallel/Headlamp 23.77
50 TestAddons/parallel/CloudSpanner 5.6
51 TestAddons/parallel/LocalPath 55.39
52 TestAddons/parallel/NvidiaDevicePlugin 5.65
53 TestAddons/parallel/Yakd 11.93
55 TestCertOptions 81.72
56 TestCertExpiration 250.46
58 TestForceSystemdFlag 71.25
59 TestForceSystemdEnv 60.18
61 TestKVMDriverInstallOrUpdate 3.88
65 TestErrorSpam/setup 39.55
66 TestErrorSpam/start 0.33
67 TestErrorSpam/status 0.72
68 TestErrorSpam/pause 1.53
69 TestErrorSpam/unpause 1.56
70 TestErrorSpam/stop 4.6
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 57.63
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 34.75
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.08
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.79
82 TestFunctional/serial/CacheCmd/cache/add_local 2.12
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
84 TestFunctional/serial/CacheCmd/cache/list 0.05
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.73
87 TestFunctional/serial/CacheCmd/cache/delete 0.09
88 TestFunctional/serial/MinikubeKubectlCmd 0.1
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
90 TestFunctional/serial/ExtraConfig 51.75
91 TestFunctional/serial/ComponentHealth 0.07
92 TestFunctional/serial/LogsCmd 1.38
93 TestFunctional/serial/LogsFileCmd 1.38
94 TestFunctional/serial/InvalidService 4.68
96 TestFunctional/parallel/ConfigCmd 0.3
97 TestFunctional/parallel/DashboardCmd 13.28
98 TestFunctional/parallel/DryRun 0.27
99 TestFunctional/parallel/InternationalLanguage 0.14
100 TestFunctional/parallel/StatusCmd 0.95
104 TestFunctional/parallel/ServiceCmdConnect 18.67
105 TestFunctional/parallel/AddonsCmd 0.12
106 TestFunctional/parallel/PersistentVolumeClaim 49.21
108 TestFunctional/parallel/SSHCmd 0.37
109 TestFunctional/parallel/CpCmd 1.19
110 TestFunctional/parallel/MySQL 24.1
111 TestFunctional/parallel/FileSync 0.19
112 TestFunctional/parallel/CertSync 1.18
116 TestFunctional/parallel/NodeLabels 0.07
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.42
120 TestFunctional/parallel/License 0.56
121 TestFunctional/parallel/Version/short 0.04
122 TestFunctional/parallel/Version/components 0.6
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.2
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
127 TestFunctional/parallel/ImageCommands/ImageBuild 3.26
128 TestFunctional/parallel/ImageCommands/Setup 1.85
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.38
142 TestFunctional/parallel/MountCmd/any-port 17
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.24
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.02
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 7.58
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.4
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
149 TestFunctional/parallel/MountCmd/specific-port 1.93
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.9
151 TestFunctional/parallel/ServiceCmd/DeployApp 10.25
152 TestFunctional/parallel/ProfileCmd/profile_not_create 0.29
153 TestFunctional/parallel/ProfileCmd/profile_list 0.31
154 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
155 TestFunctional/parallel/ServiceCmd/List 1.28
156 TestFunctional/parallel/ServiceCmd/JSONOutput 1.26
157 TestFunctional/parallel/ServiceCmd/HTTPS 0.29
158 TestFunctional/parallel/ServiceCmd/Format 0.29
159 TestFunctional/parallel/ServiceCmd/URL 0.27
160 TestFunctional/delete_echo-server_images 0.03
161 TestFunctional/delete_my-image_image 0.02
162 TestFunctional/delete_minikube_cached_images 0.02
166 TestMultiControlPlane/serial/StartCluster 218.08
167 TestMultiControlPlane/serial/DeployApp 6.19
168 TestMultiControlPlane/serial/PingHostFromPods 1.25
169 TestMultiControlPlane/serial/AddWorkerNode 53.17
170 TestMultiControlPlane/serial/NodeLabels 0.07
171 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.56
172 TestMultiControlPlane/serial/CopyFile 12.68
174 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.49
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.4
178 TestMultiControlPlane/serial/DeleteSecondaryNode 17.05
179 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
181 TestMultiControlPlane/serial/RestartCluster 354.24
182 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.36
183 TestMultiControlPlane/serial/AddSecondaryNode 80.46
184 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.52
188 TestJSONOutput/start/Command 55.08
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.72
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.6
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 6.7
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.19
216 TestMainNoArgs 0.04
217 TestMinikubeProfile 86.83
220 TestMountStart/serial/StartWithMountFirst 27.22
221 TestMountStart/serial/VerifyMountFirst 0.37
222 TestMountStart/serial/StartWithMountSecond 31.04
223 TestMountStart/serial/VerifyMountSecond 0.37
224 TestMountStart/serial/DeleteFirst 0.73
225 TestMountStart/serial/VerifyMountPostDelete 0.37
226 TestMountStart/serial/Stop 2.28
227 TestMountStart/serial/RestartStopped 22.95
228 TestMountStart/serial/VerifyMountPostStop 0.38
231 TestMultiNode/serial/FreshStart2Nodes 122.27
232 TestMultiNode/serial/DeployApp2Nodes 5.57
233 TestMultiNode/serial/PingHostFrom2Pods 0.8
234 TestMultiNode/serial/AddNode 48.19
235 TestMultiNode/serial/MultiNodeLabels 0.06
236 TestMultiNode/serial/ProfileList 0.22
237 TestMultiNode/serial/CopyFile 7.21
238 TestMultiNode/serial/StopNode 2.23
239 TestMultiNode/serial/StartAfterStop 39.16
241 TestMultiNode/serial/DeleteNode 2.34
243 TestMultiNode/serial/RestartMultiNode 181.61
244 TestMultiNode/serial/ValidateNameConflict 46.49
251 TestScheduledStopUnix 115.01
255 TestRunningBinaryUpgrade 212.88
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
261 TestNoKubernetes/serial/StartWithK8s 89.72
262 TestNoKubernetes/serial/StartWithStopK8s 23.3
263 TestStoppedBinaryUpgrade/Setup 2.27
264 TestStoppedBinaryUpgrade/Upgrade 113.99
265 TestNoKubernetes/serial/Start 36.87
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
267 TestNoKubernetes/serial/ProfileList 2.36
268 TestNoKubernetes/serial/Stop 1.41
269 TestNoKubernetes/serial/StartNoArgs 23.85
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
285 TestNetworkPlugins/group/false 3.14
286 TestStoppedBinaryUpgrade/MinikubeLogs 0.93
291 TestPause/serial/Start 120.71
294 TestPause/serial/SecondStartNoReconfiguration 40.79
295 TestPause/serial/Pause 0.67
296 TestPause/serial/VerifyStatus 0.23
297 TestPause/serial/Unpause 0.63
298 TestPause/serial/PauseAgain 0.78
299 TestPause/serial/DeletePaused 1.02
300 TestPause/serial/VerifyDeletedResources 0.46
302 TestStartStop/group/embed-certs/serial/FirstStart 58.11
304 TestStartStop/group/no-preload/serial/FirstStart 108.81
305 TestStartStop/group/embed-certs/serial/DeployApp 10.29
306 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.03
308 TestStartStop/group/no-preload/serial/DeployApp 9.29
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.95
314 TestStartStop/group/embed-certs/serial/SecondStart 679.85
315 TestStartStop/group/old-k8s-version/serial/Stop 5.3
316 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 304.96
321 TestStartStop/group/no-preload/serial/SecondStart 603.61
322 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.32
323 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.15
326 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 622.31
335 TestStartStop/group/newest-cni/serial/FirstStart 48.64
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.24
338 TestStartStop/group/newest-cni/serial/Stop 10.6
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
340 TestStartStop/group/newest-cni/serial/SecondStart 57.75
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
344 TestStartStop/group/newest-cni/serial/Pause 4.1
345 TestNetworkPlugins/group/auto/Start 101.56
346 TestNetworkPlugins/group/kindnet/Start 87.33
347 TestNetworkPlugins/group/calico/Start 108.37
348 TestNetworkPlugins/group/auto/KubeletFlags 0.21
349 TestNetworkPlugins/group/auto/NetCatPod 10.25
350 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
351 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
352 TestNetworkPlugins/group/kindnet/NetCatPod 11.24
353 TestNetworkPlugins/group/auto/DNS 0.23
354 TestNetworkPlugins/group/auto/Localhost 0.2
355 TestNetworkPlugins/group/auto/HairPin 0.14
357 TestNetworkPlugins/group/kindnet/DNS 0.18
358 TestNetworkPlugins/group/kindnet/Localhost 0.17
359 TestNetworkPlugins/group/kindnet/HairPin 0.17
360 TestNetworkPlugins/group/custom-flannel/Start 81.34
361 TestNetworkPlugins/group/enable-default-cni/Start 81.1
362 TestNetworkPlugins/group/calico/ControllerPod 6.01
363 TestNetworkPlugins/group/calico/KubeletFlags 0.2
364 TestNetworkPlugins/group/calico/NetCatPod 11.22
365 TestNetworkPlugins/group/calico/DNS 0.17
366 TestNetworkPlugins/group/calico/Localhost 0.13
367 TestNetworkPlugins/group/calico/HairPin 0.13
368 TestNetworkPlugins/group/flannel/Start 93.27
369 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
370 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.32
371 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
372 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.25
373 TestNetworkPlugins/group/custom-flannel/DNS 0.18
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
376 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
377 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
378 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
379 TestNetworkPlugins/group/bridge/Start 94.93
380 TestNetworkPlugins/group/flannel/ControllerPod 6.01
381 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
382 TestNetworkPlugins/group/flannel/NetCatPod 11.23
383 TestNetworkPlugins/group/flannel/DNS 0.15
384 TestNetworkPlugins/group/flannel/Localhost 0.13
385 TestNetworkPlugins/group/flannel/HairPin 0.14
386 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
387 TestNetworkPlugins/group/bridge/NetCatPod 11.21
388 TestNetworkPlugins/group/bridge/DNS 0.18
389 TestNetworkPlugins/group/bridge/Localhost 0.12
390 TestNetworkPlugins/group/bridge/HairPin 0.12
x
+
TestDownloadOnly/v1.20.0/json-events (26.74s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-821507 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-821507 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (26.739066193s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (26.74s)
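
The json-events subtests drive minikube start with -o=json --download-only, so progress is emitted as a stream of JSON events on stdout rather than human-readable text. A minimal sketch of inspecting that stream by hand, using the flags shown above; piping through jq is my addition and assumes jq is installed:

    out/minikube-linux-amd64 start -o=json --download-only -p download-only-821507 --force \
      --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2 | jq .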

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-821507
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-821507: exit status 85 (58.001433ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-821507 | jenkins | v1.33.1 | 12 Aug 24 10:20 UTC |          |
	|         | -p download-only-821507        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 10:20:10
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 10:20:10.914713   10939 out.go:291] Setting OutFile to fd 1 ...
	I0812 10:20:10.914950   10939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:20:10.914960   10939 out.go:304] Setting ErrFile to fd 2...
	I0812 10:20:10.914964   10939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:20:10.915170   10939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	W0812 10:20:10.915280   10939 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19409-3774/.minikube/config/config.json: open /home/jenkins/minikube-integration/19409-3774/.minikube/config/config.json: no such file or directory
	I0812 10:20:10.915828   10939 out.go:298] Setting JSON to true
	I0812 10:20:10.916671   10939 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":152,"bootTime":1723457859,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 10:20:10.916734   10939 start.go:139] virtualization: kvm guest
	I0812 10:20:10.919018   10939 out.go:97] [download-only-821507] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0812 10:20:10.919140   10939 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball: no such file or directory
	I0812 10:20:10.919154   10939 notify.go:220] Checking for updates...
	I0812 10:20:10.920724   10939 out.go:169] MINIKUBE_LOCATION=19409
	I0812 10:20:10.922228   10939 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 10:20:10.923760   10939 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 10:20:10.925118   10939 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 10:20:10.926436   10939 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0812 10:20:10.928858   10939 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0812 10:20:10.929140   10939 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 10:20:11.027953   10939 out.go:97] Using the kvm2 driver based on user configuration
	I0812 10:20:11.027980   10939 start.go:297] selected driver: kvm2
	I0812 10:20:11.027985   10939 start.go:901] validating driver "kvm2" against <nil>
	I0812 10:20:11.028314   10939 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 10:20:11.028451   10939 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19409-3774/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 10:20:11.044490   10939 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 10:20:11.044547   10939 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 10:20:11.045089   10939 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0812 10:20:11.045269   10939 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0812 10:20:11.045296   10939 cni.go:84] Creating CNI manager for ""
	I0812 10:20:11.045303   10939 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 10:20:11.045314   10939 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0812 10:20:11.045394   10939 start.go:340] cluster config:
	{Name:download-only-821507 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-821507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 10:20:11.045638   10939 iso.go:125] acquiring lock: {Name:mk817f4bb6a5031e68978029203802957205757f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 10:20:11.047584   10939 out.go:97] Downloading VM boot image ...
	I0812 10:20:11.047629   10939 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19409-3774/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0812 10:20:23.456398   10939 out.go:97] Starting "download-only-821507" primary control-plane node in "download-only-821507" cluster
	I0812 10:20:23.456429   10939 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0812 10:20:23.559773   10939 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0812 10:20:23.559815   10939 cache.go:56] Caching tarball of preloaded images
	I0812 10:20:23.559981   10939 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0812 10:20:23.562006   10939 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0812 10:20:23.562022   10939 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0812 10:20:23.659300   10939 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-821507 host does not exist
	  To start a cluster, run: "minikube start -p download-only-821507"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
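
Any entry in this report can be re-run individually with Go's -run filter on subtest names. A minimal sketch, assuming the integration tests live under ./test/integration of the minikube source tree and are guarded by an "integration" build tag (neither detail is shown in this report), and that out/minikube-linux-amd64 has already been built:

    # re-run only the TestDownloadOnly/v1.20.0 group seen above
    go test -tags integration ./test/integration \
      -run 'TestDownloadOnly/v1.20.0' -timeout 30m -v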

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-821507
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/json-events (16.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-652906 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-652906 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (16.13462761s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (16.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-652906
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-652906: exit status 85 (56.379961ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-821507 | jenkins | v1.33.1 | 12 Aug 24 10:20 UTC |                     |
	|         | -p download-only-821507        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 12 Aug 24 10:20 UTC | 12 Aug 24 10:20 UTC |
	| delete  | -p download-only-821507        | download-only-821507 | jenkins | v1.33.1 | 12 Aug 24 10:20 UTC | 12 Aug 24 10:20 UTC |
	| start   | -o=json --download-only        | download-only-652906 | jenkins | v1.33.1 | 12 Aug 24 10:20 UTC |                     |
	|         | -p download-only-652906        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 10:20:37
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 10:20:37.978511   11208 out.go:291] Setting OutFile to fd 1 ...
	I0812 10:20:37.978774   11208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:20:37.978785   11208 out.go:304] Setting ErrFile to fd 2...
	I0812 10:20:37.978789   11208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:20:37.978959   11208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 10:20:37.979542   11208 out.go:298] Setting JSON to true
	I0812 10:20:37.980371   11208 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":179,"bootTime":1723457859,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 10:20:37.980425   11208 start.go:139] virtualization: kvm guest
	I0812 10:20:37.982691   11208 out.go:97] [download-only-652906] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 10:20:37.982857   11208 notify.go:220] Checking for updates...
	I0812 10:20:37.984105   11208 out.go:169] MINIKUBE_LOCATION=19409
	I0812 10:20:37.985584   11208 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 10:20:37.987077   11208 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 10:20:37.988392   11208 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 10:20:37.989794   11208 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0812 10:20:37.992121   11208 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0812 10:20:37.992363   11208 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 10:20:38.026101   11208 out.go:97] Using the kvm2 driver based on user configuration
	I0812 10:20:38.026142   11208 start.go:297] selected driver: kvm2
	I0812 10:20:38.026151   11208 start.go:901] validating driver "kvm2" against <nil>
	I0812 10:20:38.026642   11208 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 10:20:38.026755   11208 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19409-3774/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 10:20:38.042444   11208 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 10:20:38.042510   11208 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 10:20:38.042981   11208 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0812 10:20:38.043130   11208 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0812 10:20:38.043179   11208 cni.go:84] Creating CNI manager for ""
	I0812 10:20:38.043191   11208 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 10:20:38.043199   11208 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0812 10:20:38.043249   11208 start.go:340] cluster config:
	{Name:download-only-652906 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-652906 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 10:20:38.043334   11208 iso.go:125] acquiring lock: {Name:mk817f4bb6a5031e68978029203802957205757f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 10:20:38.045149   11208 out.go:97] Starting "download-only-652906" primary control-plane node in "download-only-652906" cluster
	I0812 10:20:38.045176   11208 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 10:20:38.549419   11208 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0812 10:20:38.549466   11208 cache.go:56] Caching tarball of preloaded images
	I0812 10:20:38.549658   11208 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 10:20:38.551432   11208 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0812 10:20:38.551459   11208 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0812 10:20:38.648598   11208 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:15191286f02471d9b3ea0b587fcafc39 -> /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0812 10:20:52.436254   11208 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0812 10:20:52.436384   11208 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-652906 host does not exist
	  To start a cluster, run: "minikube start -p download-only-652906"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-652906
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/json-events (12.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-850332 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-850332 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.284299833s)
--- PASS: TestDownloadOnly/v1.31.0-rc.0/json-events (12.28s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-rc.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-850332
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-850332: exit status 85 (54.574396ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-821507 | jenkins | v1.33.1 | 12 Aug 24 10:20 UTC |                     |
	|         | -p download-only-821507           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 12 Aug 24 10:20 UTC | 12 Aug 24 10:20 UTC |
	| delete  | -p download-only-821507           | download-only-821507 | jenkins | v1.33.1 | 12 Aug 24 10:20 UTC | 12 Aug 24 10:20 UTC |
	| start   | -o=json --download-only           | download-only-652906 | jenkins | v1.33.1 | 12 Aug 24 10:20 UTC |                     |
	|         | -p download-only-652906           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 12 Aug 24 10:20 UTC | 12 Aug 24 10:20 UTC |
	| delete  | -p download-only-652906           | download-only-652906 | jenkins | v1.33.1 | 12 Aug 24 10:20 UTC | 12 Aug 24 10:20 UTC |
	| start   | -o=json --download-only           | download-only-850332 | jenkins | v1.33.1 | 12 Aug 24 10:20 UTC |                     |
	|         | -p download-only-850332           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 10:20:54
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 10:20:54.420958   11428 out.go:291] Setting OutFile to fd 1 ...
	I0812 10:20:54.421072   11428 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:20:54.421081   11428 out.go:304] Setting ErrFile to fd 2...
	I0812 10:20:54.421085   11428 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:20:54.421294   11428 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 10:20:54.421846   11428 out.go:298] Setting JSON to true
	I0812 10:20:54.422668   11428 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":195,"bootTime":1723457859,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 10:20:54.422729   11428 start.go:139] virtualization: kvm guest
	I0812 10:20:54.424858   11428 out.go:97] [download-only-850332] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 10:20:54.425048   11428 notify.go:220] Checking for updates...
	I0812 10:20:54.426508   11428 out.go:169] MINIKUBE_LOCATION=19409
	I0812 10:20:54.427855   11428 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 10:20:54.429259   11428 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 10:20:54.430526   11428 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 10:20:54.431953   11428 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0812 10:20:54.434760   11428 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0812 10:20:54.435008   11428 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 10:20:54.468003   11428 out.go:97] Using the kvm2 driver based on user configuration
	I0812 10:20:54.468033   11428 start.go:297] selected driver: kvm2
	I0812 10:20:54.468041   11428 start.go:901] validating driver "kvm2" against <nil>
	I0812 10:20:54.468499   11428 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 10:20:54.468599   11428 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19409-3774/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 10:20:54.484189   11428 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 10:20:54.484245   11428 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 10:20:54.484732   11428 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0812 10:20:54.484877   11428 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0812 10:20:54.484905   11428 cni.go:84] Creating CNI manager for ""
	I0812 10:20:54.484915   11428 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 10:20:54.484923   11428 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0812 10:20:54.484977   11428 start.go:340] cluster config:
	{Name:download-only-850332 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:download-only-850332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 10:20:54.485077   11428 iso.go:125] acquiring lock: {Name:mk817f4bb6a5031e68978029203802957205757f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 10:20:54.486828   11428 out.go:97] Starting "download-only-850332" primary control-plane node in "download-only-850332" cluster
	I0812 10:20:54.486848   11428 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0812 10:20:54.992360   11428 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0812 10:20:54.992396   11428 cache.go:56] Caching tarball of preloaded images
	I0812 10:20:54.992541   11428 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0812 10:20:54.994386   11428 out.go:97] Downloading Kubernetes v1.31.0-rc.0 preload ...
	I0812 10:20:54.994405   11428 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0812 10:20:55.093352   11428 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:89b2d75682ccec9e5b50b57ad7b65741 -> /home/jenkins/minikube-integration/19409-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-850332 host does not exist
	  To start a cluster, run: "minikube start -p download-only-850332"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-850332
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.57s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-087798 --alsologtostderr --binary-mirror http://127.0.0.1:38789 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-087798" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-087798
--- PASS: TestBinaryMirror (0.57s)

                                                
                                    
x
+
TestOffline (100.09s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-434049 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-434049 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m39.186021782s)
helpers_test.go:175: Cleaning up "offline-crio-434049" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-434049
--- PASS: TestOffline (100.09s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-883541
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-883541: exit status 85 (55.515151ms)

                                                
                                                
-- stdout --
	* Profile "addons-883541" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-883541"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-883541
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-883541: exit status 85 (54.76926ms)

                                                
                                                
-- stdout --
	* Profile "addons-883541" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-883541"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
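
Both PreSetup checks above exercise the expected failure path: addon commands against a profile that does not exist yet return exit status 85 plus the hint printed in stdout. A minimal sketch of following that hint by hand (profile name and flags taken from this run):

    out/minikube-linux-amd64 profile list
    out/minikube-linux-amd64 start -p addons-883541 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 addons enable dashboard -p addons-883541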

                                                
                                    
x
+
TestAddons/Setup (142.65s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-883541 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-883541 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m22.651569642s)
--- PASS: TestAddons/Setup (142.65s)
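
The start invocation above enables every addon under test in one go. A trimmed-down sketch with the same flags but only a few of the addons, which may be handier when reproducing a single addon failure locally:
	minikube start -p addons-883541 --wait=true --memory=4000 \
	  --driver=kvm2 --container-runtime=crio \
	  --addons=registry --addons=metrics-server --addons=ingress --addons=ingress-dns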

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-883541 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-883541 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)
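
The check verifies that the gcp-auth addon propagates its credentials secret into namespaces created after the addon was enabled; the two commands below are the same ones the test runs, with the context name taken from this run:
	kubectl --context addons-883541 create ns new-namespace
	# the gcp-auth addon is expected to have copied its secret into the new namespace
	kubectl --context addons-883541 get secret gcp-auth -n new-namespace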

                                                
                                    
TestAddons/parallel/Registry (16.93s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.176306ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-xww5t" [bd991983-9d87-471c-b2ac-7cae341f9d1f] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004762257s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-8xczh" [7f708cb9-ae7f-4021-be11-218df27928d4] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004782464s
addons_test.go:342: (dbg) Run:  kubectl --context addons-883541 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-883541 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-883541 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.165788926s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-883541 ip
2024/08/12 10:24:05 [DEBUG] GET http://192.168.39.215:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-883541 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.93s)
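
The registry check has two halves: reachability of the in-cluster service DNS name from a pod, and reachability of the registry proxy from outside the cluster. A sketch of both, assuming the profile is still up (the curl target mirrors the `GET http://<node-ip>:5000` probe logged above):
	# in-cluster: the registry Service must answer on its cluster DNS name
	kubectl --context addons-883541 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

	# from the host: the registry proxy is probed on the node IP, port 5000
	curl -s "http://$(minikube -p addons-883541 ip):5000"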

                                                
                                    
TestAddons/parallel/InspektorGadget (12.14s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-b9vll" [3db94c10-8348-4c70-a22e-f47873db1f10] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004979325s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-883541
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-883541: (6.137470328s)
--- PASS: TestAddons/parallel/InspektorGadget (12.14s)

                                                
                                    
TestAddons/parallel/HelmTiller (11.4s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.876427ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-45ft9" [87ea7eab-fd15-420a-ad1a-20231ebf7ba3] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004818862s
addons_test.go:475: (dbg) Run:  kubectl --context addons-883541 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-883541 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.786938541s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-883541 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.40s)
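
The tiller check is simply `helm version` run from a throwaway client pod against the in-cluster tiller, reproduced from the log:
	# helm v2 client in a one-shot pod; `version` only succeeds if tiller-deploy answers
	kubectl --context addons-883541 run --rm helm-test --restart=Never \
	  --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version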

                                                
                                    
TestAddons/parallel/CSI (79.03s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 7.358175ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-883541 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-883541 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [baf8cb0a-e7b6-4dda-80dd-253c9b42adad] Pending
helpers_test.go:344: "task-pv-pod" [baf8cb0a-e7b6-4dda-80dd-253c9b42adad] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [baf8cb0a-e7b6-4dda-80dd-253c9b42adad] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.003474337s
addons_test.go:590: (dbg) Run:  kubectl --context addons-883541 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-883541 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-883541 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-883541 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-883541 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-883541 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-883541 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [1a1a8f3c-e811-42d5-8439-698c67e08c00] Pending
helpers_test.go:344: "task-pv-pod-restore" [1a1a8f3c-e811-42d5-8439-698c67e08c00] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [1a1a8f3c-e811-42d5-8439-698c67e08c00] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00425665s
addons_test.go:632: (dbg) Run:  kubectl --context addons-883541 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-883541 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-883541 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-883541 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-883541 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.768364838s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-883541 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (79.03s)
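
The long runs of identical `kubectl get pvc ... -o jsonpath={.status.phase}` lines above are the test helper polling until the claim leaves Pending. A hand-rolled equivalent of that wait loop, using the same context and claim name (the Bound check and 2s interval are illustrative):
	# poll until the csi-hostpath provisioner binds the claim (phase goes Pending -> Bound)
	until [ "$(kubectl --context addons-883541 get pvc hpvc -n default -o jsonpath='{.status.phase}')" = "Bound" ]; do
	  sleep 2
	done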

                                                
                                    
TestAddons/parallel/Headlamp (23.77s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-883541 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-883541 --alsologtostderr -v=1: (1.045459882s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-9d868696f-gg4pz" [48713e96-7494-42b9-a813-aea97ee2893a] Pending
helpers_test.go:344: "headlamp-9d868696f-gg4pz" [48713e96-7494-42b9-a813-aea97ee2893a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-9d868696f-gg4pz" [48713e96-7494-42b9-a813-aea97ee2893a] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-9d868696f-gg4pz" [48713e96-7494-42b9-a813-aea97ee2893a] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 17.00387725s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-883541 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-883541 addons disable headlamp --alsologtostderr -v=1: (5.722863819s)
--- PASS: TestAddons/parallel/Headlamp (23.77s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.6s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-85rt9" [95ae0610-d5d9-4796-b7cd-1cbc90742cb4] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0139166s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-883541
--- PASS: TestAddons/parallel/CloudSpanner (5.60s)

                                                
                                    
TestAddons/parallel/LocalPath (55.39s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-883541 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-883541 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-883541 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [cfc1ab02-8401-463c-81ee-e6b9675ee331] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [cfc1ab02-8401-463c-81ee-e6b9675ee331] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [cfc1ab02-8401-463c-81ee-e6b9675ee331] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004823012s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-883541 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-883541 ssh "cat /opt/local-path-provisioner/pvc-1f7cbad0-48c1-4940-b719-ed56d7f5b5f3_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-883541 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-883541 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-883541 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-883541 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.491071168s)
--- PASS: TestAddons/parallel/LocalPath (55.39s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.65s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-r9hqx" [12e175a3-9d78-4c03-af1e-0b8ed635e01b] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005920293s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-883541
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.65s)

                                                
                                    
TestAddons/parallel/Yakd (11.93s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-f95wt" [1f06cbbc-677b-4312-9c36-7db27281396e] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003894378s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-883541 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-883541 addons disable yakd --alsologtostderr -v=1: (5.929055127s)
--- PASS: TestAddons/parallel/Yakd (11.93s)

                                                
                                    
TestCertOptions (81.72s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-967682 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-967682 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m19.828323889s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-967682 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-967682 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-967682 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-967682" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-967682
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-967682: (1.390510806s)
--- PASS: TestCertOptions (81.72s)
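
What the openssl step above is looking for is that the extra --apiserver-ips/--apiserver-names values ended up as SANs on the apiserver certificate; a hand check might grep for them directly (the grep is an addition here for illustration):
	minikube -p cert-options-967682 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 'Subject Alternative Name'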

                                                
                                    
TestCertExpiration (250.46s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-002803 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-002803 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (40.537968346s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-002803 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-002803 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (28.876630313s)
helpers_test.go:175: Cleaning up "cert-expiration-002803" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-002803
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-002803: (1.045135297s)
--- PASS: TestCertExpiration (250.46s)

                                                
                                    
TestForceSystemdFlag (71.25s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-140876 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0812 11:30:45.936089   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-140876 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m10.052030343s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-140876 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-140876" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-140876
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-140876: (1.002946045s)
--- PASS: TestForceSystemdFlag (71.25s)
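
With --force-systemd the test expects CRI-O to be switched to the systemd cgroup manager; the file it cats is the drop-in minikube writes. A direct check (the grep and the cgroup_manager key are assumptions based on the conventional CRI-O config, not shown in the log):
	minikube -p force-systemd-flag-140876 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" \
	  | grep cgroup_manager    # expect: cgroup_manager = "systemd"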

                                                
                                    
TestForceSystemdEnv (60.18s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-705953 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0812 11:30:28.983185   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-705953 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (59.364486541s)
helpers_test.go:175: Cleaning up "force-systemd-env-705953" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-705953
--- PASS: TestForceSystemdEnv (60.18s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.88s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.88s)

                                                
                                    
TestErrorSpam/setup (39.55s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-302204 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-302204 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-302204 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-302204 --driver=kvm2  --container-runtime=crio: (39.547928558s)
--- PASS: TestErrorSpam/setup (39.55s)

                                                
                                    
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-302204 --log_dir /tmp/nospam-302204 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-302204 --log_dir /tmp/nospam-302204 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-302204 --log_dir /tmp/nospam-302204 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
TestErrorSpam/status (0.72s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-302204 --log_dir /tmp/nospam-302204 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-302204 --log_dir /tmp/nospam-302204 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-302204 --log_dir /tmp/nospam-302204 status
--- PASS: TestErrorSpam/status (0.72s)

                                                
                                    
TestErrorSpam/pause (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-302204 --log_dir /tmp/nospam-302204 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-302204 --log_dir /tmp/nospam-302204 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-302204 --log_dir /tmp/nospam-302204 pause
--- PASS: TestErrorSpam/pause (1.53s)

                                                
                                    
TestErrorSpam/unpause (1.56s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-302204 --log_dir /tmp/nospam-302204 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-302204 --log_dir /tmp/nospam-302204 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-302204 --log_dir /tmp/nospam-302204 unpause
--- PASS: TestErrorSpam/unpause (1.56s)

                                                
                                    
TestErrorSpam/stop (4.6s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-302204 --log_dir /tmp/nospam-302204 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-302204 --log_dir /tmp/nospam-302204 stop: (1.530887378s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-302204 --log_dir /tmp/nospam-302204 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-302204 --log_dir /tmp/nospam-302204 stop: (1.285124525s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-302204 --log_dir /tmp/nospam-302204 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-302204 --log_dir /tmp/nospam-302204 stop: (1.785128223s)
--- PASS: TestErrorSpam/stop (4.60s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19409-3774/.minikube/files/etc/test/nested/copy/10927/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (57.63s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-695176 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0812 10:33:30.975671   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
E0812 10:33:30.981795   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
E0812 10:33:30.992091   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
E0812 10:33:31.012406   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
E0812 10:33:31.052706   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
E0812 10:33:31.133063   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
E0812 10:33:31.293500   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
E0812 10:33:31.614049   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
E0812 10:33:32.254991   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
E0812 10:33:33.535507   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
E0812 10:33:36.096706   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
E0812 10:33:41.216941   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
E0812 10:33:51.458172   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-695176 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (57.626061718s)
--- PASS: TestFunctional/serial/StartWithProxy (57.63s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (34.75s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-695176 --alsologtostderr -v=8
E0812 10:34:11.939182   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-695176 --alsologtostderr -v=8: (34.749167614s)
functional_test.go:663: soft start took 34.750053828s for "functional-695176" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.75s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-695176 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.79s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-695176 cache add registry.k8s.io/pause:3.1: (1.223520124s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-695176 cache add registry.k8s.io/pause:3.3: (1.299930083s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-695176 cache add registry.k8s.io/pause:latest: (1.261705033s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.79s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-695176 /tmp/TestFunctionalserialCacheCmdcacheadd_local3057694130/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 cache add minikube-local-cache-test:functional-695176
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-695176 cache add minikube-local-cache-test:functional-695176: (1.790181767s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 cache delete minikube-local-cache-test:functional-695176
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-695176
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.12s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695176 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (210.681347ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-695176 cache reload: (1.054042565s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)
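
The reload flow above is: remove a cached image from the node's runtime, confirm `crictl inspecti` now fails, then have `cache reload` push everything in the local cache back onto the node. The same steps by hand, using the commands from this run:
	# drop a cached image from the node, confirm it is gone, then restore it from the local cache
	minikube -p functional-695176 ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p functional-695176 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits non-zero: image gone
	minikube -p functional-695176 cache reload
	minikube -p functional-695176 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again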

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 kubectl -- --context functional-695176 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-695176 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (51.75s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-695176 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0812 10:34:52.899466   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-695176 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (51.749667891s)
functional_test.go:761: restart took 51.749769165s for "functional-695176" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (51.75s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-695176 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
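
The phase/status lines above come from parsing the control-plane pods' JSON; an equivalent one-liner that prints the same information (the jsonpath template is an illustrative addition, the selector and namespace are the ones the test uses):
	kubectl --context functional-695176 get po -l tier=control-plane -n kube-system \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'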

                                                
                                    
TestFunctional/serial/LogsCmd (1.38s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-695176 logs: (1.38088815s)
--- PASS: TestFunctional/serial/LogsCmd (1.38s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.38s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 logs --file /tmp/TestFunctionalserialLogsFileCmd2825987305/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-695176 logs --file /tmp/TestFunctionalserialLogsFileCmd2825987305/001/logs.txt: (1.376466292s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.38s)

                                                
                                    
TestFunctional/serial/InvalidService (4.68s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-695176 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-695176
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-695176: exit status 115 (278.791222ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.45:30238 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-695176 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-695176 delete -f testdata/invalidsvc.yaml: (1.208490572s)
--- PASS: TestFunctional/serial/InvalidService (4.68s)
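
The scenario here is deliberate: a Service with no running backing pod should make `minikube service` bail out with SVC_UNREACHABLE (exit status 115 in this run) rather than print a dead URL. The same round trip by hand, assuming the commands are run from the minikube source tree as the test does:
	kubectl --context functional-695176 apply -f testdata/invalidsvc.yaml
	minikube service invalid-svc -p functional-695176   # fails: no running pod behind the service
	kubectl --context functional-695176 delete -f testdata/invalidsvc.yaml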

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695176 config get cpus: exit status 14 (46.595346ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695176 config get cpus: exit status 14 (44.141599ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.30s)
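
`config get` on an unset key is expected to fail with exit status 14, which is what the two Non-zero exits above show. The full set/get/unset round trip (comments are illustrative):
	minikube -p functional-695176 config set cpus 2
	minikube -p functional-695176 config get cpus      # prints 2
	minikube -p functional-695176 config unset cpus
	minikube -p functional-695176 config get cpus      # exit status 14: key not found in config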

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-695176 --alsologtostderr -v=1]
E0812 10:36:14.820193   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-695176 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 21427: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.28s)

                                                
                                    
TestFunctional/parallel/DryRun (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-695176 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-695176 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (134.677018ms)
-- stdout --
	* [functional-695176] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
-- /stdout --
** stderr ** 
	I0812 10:36:12.548243   21336 out.go:291] Setting OutFile to fd 1 ...
	I0812 10:36:12.548350   21336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:36:12.548360   21336 out.go:304] Setting ErrFile to fd 2...
	I0812 10:36:12.548366   21336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:36:12.548548   21336 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 10:36:12.549142   21336 out.go:298] Setting JSON to false
	I0812 10:36:12.550054   21336 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1114,"bootTime":1723457859,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 10:36:12.550121   21336 start.go:139] virtualization: kvm guest
	I0812 10:36:12.552405   21336 out.go:177] * [functional-695176] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 10:36:12.553993   21336 notify.go:220] Checking for updates...
	I0812 10:36:12.554022   21336 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 10:36:12.555767   21336 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 10:36:12.557434   21336 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 10:36:12.558896   21336 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 10:36:12.560504   21336 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 10:36:12.562092   21336 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 10:36:12.563947   21336 config.go:182] Loaded profile config "functional-695176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:36:12.564420   21336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:36:12.564498   21336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:36:12.579883   21336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44411
	I0812 10:36:12.580325   21336 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:36:12.581005   21336 main.go:141] libmachine: Using API Version  1
	I0812 10:36:12.581033   21336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:36:12.581373   21336 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:36:12.581602   21336 main.go:141] libmachine: (functional-695176) Calling .DriverName
	I0812 10:36:12.581833   21336 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 10:36:12.582116   21336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:36:12.582148   21336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:36:12.597175   21336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34149
	I0812 10:36:12.597619   21336 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:36:12.598152   21336 main.go:141] libmachine: Using API Version  1
	I0812 10:36:12.598180   21336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:36:12.598494   21336 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:36:12.598666   21336 main.go:141] libmachine: (functional-695176) Calling .DriverName
	I0812 10:36:12.631739   21336 out.go:177] * Using the kvm2 driver based on existing profile
	I0812 10:36:12.633715   21336 start.go:297] selected driver: kvm2
	I0812 10:36:12.633730   21336 start.go:901] validating driver "kvm2" against &{Name:functional-695176 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-695176 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.45 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 10:36:12.633875   21336 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 10:36:12.636320   21336 out.go:177] 
	W0812 10:36:12.637980   21336 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0812 10:36:12.639016   21336 out.go:177] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-695176 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.27s)
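The two dry-run invocations above can be replayed directly with the flags shown in the log: the 250MB request fails driver validation with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23), while the second call validates the existing profile without starting anything.

    # Fails validation: requested memory is below the 1800MB usable minimum (exit status 23).
    out/minikube-linux-amd64 start -p functional-695176 --dry-run --memory 250MB \
      --alsologtostderr --driver=kvm2 --container-runtime=crio
    # Validates against the existing profile without creating or starting a VM.
    out/minikube-linux-amd64 start -p functional-695176 --dry-run --alsologtostderr -v=1 \
      --driver=kvm2 --container-runtime=crio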

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-695176 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-695176 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (139.349678ms)
-- stdout --
	* [functional-695176] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
-- /stdout --
** stderr ** 
	I0812 10:36:11.461082   21202 out.go:291] Setting OutFile to fd 1 ...
	I0812 10:36:11.461192   21202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:36:11.461206   21202 out.go:304] Setting ErrFile to fd 2...
	I0812 10:36:11.461211   21202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:36:11.461498   21202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 10:36:11.461997   21202 out.go:298] Setting JSON to false
	I0812 10:36:11.462890   21202 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1112,"bootTime":1723457859,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 10:36:11.462952   21202 start.go:139] virtualization: kvm guest
	I0812 10:36:11.465341   21202 out.go:177] * [functional-695176] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0812 10:36:11.467054   21202 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 10:36:11.467099   21202 notify.go:220] Checking for updates...
	I0812 10:36:11.470156   21202 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 10:36:11.471728   21202 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 10:36:11.473022   21202 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 10:36:11.474350   21202 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 10:36:11.475872   21202 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 10:36:11.477607   21202 config.go:182] Loaded profile config "functional-695176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 10:36:11.478005   21202 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:36:11.478087   21202 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:36:11.493230   21202 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42773
	I0812 10:36:11.493631   21202 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:36:11.494229   21202 main.go:141] libmachine: Using API Version  1
	I0812 10:36:11.494260   21202 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:36:11.494641   21202 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:36:11.494859   21202 main.go:141] libmachine: (functional-695176) Calling .DriverName
	I0812 10:36:11.495112   21202 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 10:36:11.495452   21202 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 10:36:11.495493   21202 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:36:11.510469   21202 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40685
	I0812 10:36:11.510955   21202 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:36:11.511633   21202 main.go:141] libmachine: Using API Version  1
	I0812 10:36:11.511667   21202 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:36:11.512033   21202 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:36:11.512207   21202 main.go:141] libmachine: (functional-695176) Calling .DriverName
	I0812 10:36:11.547012   21202 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0812 10:36:11.548569   21202 start.go:297] selected driver: kvm2
	I0812 10:36:11.548586   21202 start.go:901] validating driver "kvm2" against &{Name:functional-695176 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-695176 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.45 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 10:36:11.548731   21202 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 10:36:11.551214   21202 out.go:177] 
	W0812 10:36:11.552619   21202 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0812 10:36:11.554169   21202 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)
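This is the same under-provisioned dry run as above, but with the output localized to French. The log does not show how the test selects the locale; a sketch under the assumption that minikube picks it up from the standard locale environment variables:

    # Assumption: LC_ALL (or LANG) drives minikube's output language.
    LC_ALL=fr out/minikube-linux-amd64 start -p functional-695176 --dry-run --memory 250MB \
      --alsologtostderr --driver=kvm2 --container-runtime=crio
    # Expected: "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY ..." and exit status 23.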

                                                
                                    
TestFunctional/parallel/StatusCmd (0.95s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.95s)
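For reference, the three status invocations checked here; the format string is the one the test passes, quoted so an interactive shell leaves the Go template braces alone.

    out/minikube-linux-amd64 -p functional-695176 status
    out/minikube-linux-amd64 -p functional-695176 status \
      -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    out/minikube-linux-amd64 -p functional-695176 status -o json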

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (18.67s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-695176 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-695176 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-2xq7k" [e92826dc-b147-4025-9826-8da69f2a0aa7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-2xq7k" [e92826dc-b147-4025-9826-8da69f2a0aa7] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 18.004757439s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.45:32397
functional_test.go:1675: http://192.168.39.45:32397: success! body:
Hostname: hello-node-connect-57b4589c47-2xq7k
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.45:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.39.45:32397
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (18.67s)
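A by-hand version of the NodePort round trip above; the deployment name and image are taken from the log, and curl stands in for the test's HTTP client. Waiting for the pod to become Ready is elided.

    kubectl --context functional-695176 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-695176 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(out/minikube-linux-amd64 -p functional-695176 service hello-node-connect --url)
    curl -s "$URL"    # echoserver echoes the request details back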

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (49.21s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [831bc204-3a68-4c46-9276-8f529bbfa907] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003607137s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-695176 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-695176 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-695176 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-695176 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-695176 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [133b1f43-174a-4438-9fec-8e47f54424f1] Pending
helpers_test.go:344: "sp-pod" [133b1f43-174a-4438-9fec-8e47f54424f1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [133b1f43-174a-4438-9fec-8e47f54424f1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.004536285s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-695176 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-695176 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-695176 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [97b4b050-91b6-4a34-a946-d19dfe315500] Pending
helpers_test.go:344: "sp-pod" [97b4b050-91b6-4a34-a946-d19dfe315500] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [97b4b050-91b6-4a34-a946-d19dfe315500] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.004693272s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-695176 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (49.21s)
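The property being verified: a file written to the PVC-backed mount survives deleting and re-creating the consuming pod. A sketch using the same manifests and pod name as the log; readiness waits between steps are elided.

    kubectl --context functional-695176 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-695176 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-695176 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-695176 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-695176 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-695176 exec sp-pod -- ls /tmp/mount    # foo should still be listed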

                                                
                                    
TestFunctional/parallel/SSHCmd (0.37s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.37s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.19s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh -n functional-695176 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 cp functional-695176:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd363987125/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh -n functional-695176 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh -n functional-695176 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.19s)
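The three copy directions exercised above, condensed; the host-side destination /tmp/cp-test.txt is chosen here for illustration, while the in-VM paths are the ones from the log.

    out/minikube-linux-amd64 -p functional-695176 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-695176 ssh -n functional-695176 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-amd64 -p functional-695176 cp functional-695176:/home/docker/cp-test.txt /tmp/cp-test.txt
    out/minikube-linux-amd64 -p functional-695176 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt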

                                                
                                    
TestFunctional/parallel/MySQL (24.1s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-695176 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-78cpl" [7239b47f-1026-46c3-8278-189a7c6159e4] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-78cpl" [7239b47f-1026-46c3-8278-189a7c6159e4] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.003757583s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-695176 exec mysql-64454c8b5c-78cpl -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-695176 exec mysql-64454c8b5c-78cpl -- mysql -ppassword -e "show databases;": exit status 1 (240.843088ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-695176 exec mysql-64454c8b5c-78cpl -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-695176 exec mysql-64454c8b5c-78cpl -- mysql -ppassword -e "show databases;": exit status 1 (213.556235ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-695176 exec mysql-64454c8b5c-78cpl -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.10s)
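The first two exec attempts fail with ERROR 2002 because mysqld is still initialising inside the pod; the test simply retries until the query succeeds. A minimal retry loop with the pod name from this run:

    # Retry "show databases;" until the MySQL server inside the pod accepts connections.
    until kubectl --context functional-695176 exec mysql-64454c8b5c-78cpl -- \
        mysql -ppassword -e "show databases;"; do
      sleep 5
    done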

                                                
                                    
TestFunctional/parallel/FileSync (0.19s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/10927/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh "sudo cat /etc/test/nested/copy/10927/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)

                                                
                                    
TestFunctional/parallel/CertSync (1.18s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/10927.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh "sudo cat /etc/ssl/certs/10927.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/10927.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh "sudo cat /usr/share/ca-certificates/10927.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/109272.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh "sudo cat /etc/ssl/certs/109272.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/109272.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh "sudo cat /usr/share/ca-certificates/109272.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.18s)
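A spot check that the host certificates were synced into the VM; the file names (10927.pem and the 51391683.0 hash link) are the ones from this run.

    out/minikube-linux-amd64 -p functional-695176 ssh "sudo cat /etc/ssl/certs/10927.pem"
    out/minikube-linux-amd64 -p functional-695176 ssh "sudo cat /usr/share/ca-certificates/10927.pem"
    out/minikube-linux-amd64 -p functional-695176 ssh "sudo cat /etc/ssl/certs/51391683.0"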

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-695176 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
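The same go-template query, quoted for interactive shell use:

    kubectl --context functional-695176 get nodes --output=go-template \
      --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'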

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695176 ssh "sudo systemctl is-active docker": exit status 1 (229.477134ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695176 ssh "sudo systemctl is-active containerd": exit status 1 (192.151448ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)
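With crio selected as the container runtime, the other runtimes are expected to be inactive; systemctl is-active exits non-zero for an inactive unit, which is why the ssh commands above report exit status 1 even though the check passes.

    out/minikube-linux-amd64 -p functional-695176 ssh "sudo systemctl is-active docker"      # prints "inactive", non-zero exit
    out/minikube-linux-amd64 -p functional-695176 ssh "sudo systemctl is-active containerd"  # prints "inactive", non-zero exit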

                                                
                                    
TestFunctional/parallel/License (0.56s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.56s)

                                                
                                    
TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
TestFunctional/parallel/Version/components (0.6s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-695176 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-695176
localhost/kicbase/echo-server:functional-695176
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240715-585640e9
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-695176 image ls --format short --alsologtostderr:
I0812 10:36:22.042110   21683 out.go:291] Setting OutFile to fd 1 ...
I0812 10:36:22.042350   21683 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 10:36:22.042357   21683 out.go:304] Setting ErrFile to fd 2...
I0812 10:36:22.042361   21683 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 10:36:22.042563   21683 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
I0812 10:36:22.043123   21683 config.go:182] Loaded profile config "functional-695176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0812 10:36:22.043251   21683 config.go:182] Loaded profile config "functional-695176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0812 10:36:22.043705   21683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0812 10:36:22.043749   21683 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 10:36:22.058498   21683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45097
I0812 10:36:22.058908   21683 main.go:141] libmachine: () Calling .GetVersion
I0812 10:36:22.059458   21683 main.go:141] libmachine: Using API Version  1
I0812 10:36:22.059486   21683 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 10:36:22.059864   21683 main.go:141] libmachine: () Calling .GetMachineName
I0812 10:36:22.060139   21683 main.go:141] libmachine: (functional-695176) Calling .GetState
I0812 10:36:22.062087   21683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0812 10:36:22.062135   21683 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 10:36:22.077550   21683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43395
I0812 10:36:22.077977   21683 main.go:141] libmachine: () Calling .GetVersion
I0812 10:36:22.078478   21683 main.go:141] libmachine: Using API Version  1
I0812 10:36:22.078499   21683 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 10:36:22.078855   21683 main.go:141] libmachine: () Calling .GetMachineName
I0812 10:36:22.079061   21683 main.go:141] libmachine: (functional-695176) Calling .DriverName
I0812 10:36:22.079282   21683 ssh_runner.go:195] Run: systemctl --version
I0812 10:36:22.079328   21683 main.go:141] libmachine: (functional-695176) Calling .GetSSHHostname
I0812 10:36:22.082267   21683 main.go:141] libmachine: (functional-695176) DBG | domain functional-695176 has defined MAC address 52:54:00:5a:4c:4f in network mk-functional-695176
I0812 10:36:22.082688   21683 main.go:141] libmachine: (functional-695176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:4c:4f", ip: ""} in network mk-functional-695176: {Iface:virbr1 ExpiryTime:2024-08-12 11:33:19 +0000 UTC Type:0 Mac:52:54:00:5a:4c:4f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:functional-695176 Clientid:01:52:54:00:5a:4c:4f}
I0812 10:36:22.082714   21683 main.go:141] libmachine: (functional-695176) DBG | domain functional-695176 has defined IP address 192.168.39.45 and MAC address 52:54:00:5a:4c:4f in network mk-functional-695176
I0812 10:36:22.082891   21683 main.go:141] libmachine: (functional-695176) Calling .GetSSHPort
I0812 10:36:22.083061   21683 main.go:141] libmachine: (functional-695176) Calling .GetSSHKeyPath
I0812 10:36:22.083285   21683 main.go:141] libmachine: (functional-695176) Calling .GetSSHUsername
I0812 10:36:22.083502   21683 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/functional-695176/id_rsa Username:docker}
I0812 10:36:22.163892   21683 ssh_runner.go:195] Run: sudo crictl images --output json
I0812 10:36:22.203092   21683 main.go:141] libmachine: Making call to close driver server
I0812 10:36:22.203109   21683 main.go:141] libmachine: (functional-695176) Calling .Close
I0812 10:36:22.203390   21683 main.go:141] libmachine: Successfully made call to close driver server
I0812 10:36:22.203423   21683 main.go:141] libmachine: Making call to close connection to plugin binary
I0812 10:36:22.203428   21683 main.go:141] libmachine: (functional-695176) DBG | Closing plugin on server side
I0812 10:36:22.203439   21683 main.go:141] libmachine: Making call to close driver server
I0812 10:36:22.203472   21683 main.go:141] libmachine: (functional-695176) Calling .Close
I0812 10:36:22.203758   21683 main.go:141] libmachine: Successfully made call to close driver server
I0812 10:36:22.203777   21683 main.go:141] libmachine: Making call to close connection to plugin binary
I0812 10:36:22.203764   21683 main.go:141] libmachine: (functional-695176) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
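The same image listing can be rendered in the other formats exercised in the next two blocks; a short reference, assuming the profile is still running:

    out/minikube-linux-amd64 -p functional-695176 image ls --format short
    out/minikube-linux-amd64 -p functional-695176 image ls --format table
    out/minikube-linux-amd64 -p functional-695176 image ls --format json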

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-695176 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/nginx                 | latest             | a72860cb95fd5 | 192MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 76932a3b37d7e | 112MB  |
| registry.k8s.io/kube-scheduler          | v1.30.3            | 3edc18e7b7672 | 63.1MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/kicbase/echo-server           | functional-695176  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| localhost/minikube-local-cache-test     | functional-695176  | 7f8120cdd7f12 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 1f6d574d502f3 | 118MB  |
| registry.k8s.io/kube-proxy              | v1.30.3            | 55bb025d2cfa5 | 86MB   |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-695176 image ls --format table --alsologtostderr:
I0812 10:36:23.568960   21933 out.go:291] Setting OutFile to fd 1 ...
I0812 10:36:23.569184   21933 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 10:36:23.569191   21933 out.go:304] Setting ErrFile to fd 2...
I0812 10:36:23.569195   21933 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 10:36:23.569387   21933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
I0812 10:36:23.569892   21933 config.go:182] Loaded profile config "functional-695176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0812 10:36:23.569978   21933 config.go:182] Loaded profile config "functional-695176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0812 10:36:23.570320   21933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0812 10:36:23.570357   21933 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 10:36:23.584980   21933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41573
I0812 10:36:23.585479   21933 main.go:141] libmachine: () Calling .GetVersion
I0812 10:36:23.586114   21933 main.go:141] libmachine: Using API Version  1
I0812 10:36:23.586137   21933 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 10:36:23.586436   21933 main.go:141] libmachine: () Calling .GetMachineName
I0812 10:36:23.586618   21933 main.go:141] libmachine: (functional-695176) Calling .GetState
I0812 10:36:23.588326   21933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0812 10:36:23.588361   21933 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 10:36:23.603732   21933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41813
I0812 10:36:23.604207   21933 main.go:141] libmachine: () Calling .GetVersion
I0812 10:36:23.604682   21933 main.go:141] libmachine: Using API Version  1
I0812 10:36:23.604709   21933 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 10:36:23.605054   21933 main.go:141] libmachine: () Calling .GetMachineName
I0812 10:36:23.605262   21933 main.go:141] libmachine: (functional-695176) Calling .DriverName
I0812 10:36:23.605477   21933 ssh_runner.go:195] Run: systemctl --version
I0812 10:36:23.605511   21933 main.go:141] libmachine: (functional-695176) Calling .GetSSHHostname
I0812 10:36:23.608530   21933 main.go:141] libmachine: (functional-695176) DBG | domain functional-695176 has defined MAC address 52:54:00:5a:4c:4f in network mk-functional-695176
I0812 10:36:23.608931   21933 main.go:141] libmachine: (functional-695176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:4c:4f", ip: ""} in network mk-functional-695176: {Iface:virbr1 ExpiryTime:2024-08-12 11:33:19 +0000 UTC Type:0 Mac:52:54:00:5a:4c:4f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:functional-695176 Clientid:01:52:54:00:5a:4c:4f}
I0812 10:36:23.608959   21933 main.go:141] libmachine: (functional-695176) DBG | domain functional-695176 has defined IP address 192.168.39.45 and MAC address 52:54:00:5a:4c:4f in network mk-functional-695176
I0812 10:36:23.609145   21933 main.go:141] libmachine: (functional-695176) Calling .GetSSHPort
I0812 10:36:23.609296   21933 main.go:141] libmachine: (functional-695176) Calling .GetSSHKeyPath
I0812 10:36:23.609441   21933 main.go:141] libmachine: (functional-695176) Calling .GetSSHUsername
I0812 10:36:23.609583   21933 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/functional-695176/id_rsa Username:docker}
I0812 10:36:23.687157   21933 ssh_runner.go:195] Run: sudo crictl images --output json
I0812 10:36:23.724364   21933 main.go:141] libmachine: Making call to close driver server
I0812 10:36:23.724384   21933 main.go:141] libmachine: (functional-695176) Calling .Close
I0812 10:36:23.724653   21933 main.go:141] libmachine: Successfully made call to close driver server
I0812 10:36:23.724675   21933 main.go:141] libmachine: Making call to close connection to plugin binary
I0812 10:36:23.724692   21933 main.go:141] libmachine: (functional-695176) DBG | Closing plugin on server side
I0812 10:36:23.724813   21933 main.go:141] libmachine: Making call to close driver server
I0812 10:36:23.724891   21933 main.go:141] libmachine: (functional-695176) Calling .Close
I0812 10:36:23.725137   21933 main.go:141] libmachine: Successfully made call to close driver server
I0812 10:36:23.725157   21933 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-695176 image ls --format json --alsologtostderr:
[{"id":"5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115","docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"87165492"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"85953945"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c","registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117609954"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"63051080"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-695176"],"size":"4943877"},{"id":"7f8120cdd7f12194e91318a0af384d4adfd278964b35f1af0b38d40856a76ef6","repoDigests":["localhost/minikube-local-cache-test@sha256:736519c059f5e9bfe28df2d4f2a28daf100d051c9f8d490255f1bfe232a9dd32"],"repoTags":["localhost/minikube-local-cache-test:functional-695176"],"size":"3330"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":["docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c","docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc"],"repoTags":["docker.io/library/nginx:latest"],"size":"191750286"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7","registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"112198984"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-695176 image ls --format json --alsologtostderr:
I0812 10:36:23.361019   21909 out.go:291] Setting OutFile to fd 1 ...
I0812 10:36:23.361133   21909 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 10:36:23.361143   21909 out.go:304] Setting ErrFile to fd 2...
I0812 10:36:23.361149   21909 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 10:36:23.361362   21909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
I0812 10:36:23.361923   21909 config.go:182] Loaded profile config "functional-695176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0812 10:36:23.362042   21909 config.go:182] Loaded profile config "functional-695176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0812 10:36:23.362403   21909 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0812 10:36:23.362459   21909 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 10:36:23.377476   21909 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41393
I0812 10:36:23.378014   21909 main.go:141] libmachine: () Calling .GetVersion
I0812 10:36:23.378618   21909 main.go:141] libmachine: Using API Version  1
I0812 10:36:23.378644   21909 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 10:36:23.379049   21909 main.go:141] libmachine: () Calling .GetMachineName
I0812 10:36:23.379263   21909 main.go:141] libmachine: (functional-695176) Calling .GetState
I0812 10:36:23.381245   21909 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0812 10:36:23.381282   21909 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 10:36:23.396219   21909 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39315
I0812 10:36:23.396727   21909 main.go:141] libmachine: () Calling .GetVersion
I0812 10:36:23.397242   21909 main.go:141] libmachine: Using API Version  1
I0812 10:36:23.397265   21909 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 10:36:23.397561   21909 main.go:141] libmachine: () Calling .GetMachineName
I0812 10:36:23.397784   21909 main.go:141] libmachine: (functional-695176) Calling .DriverName
I0812 10:36:23.398052   21909 ssh_runner.go:195] Run: systemctl --version
I0812 10:36:23.398090   21909 main.go:141] libmachine: (functional-695176) Calling .GetSSHHostname
I0812 10:36:23.400814   21909 main.go:141] libmachine: (functional-695176) DBG | domain functional-695176 has defined MAC address 52:54:00:5a:4c:4f in network mk-functional-695176
I0812 10:36:23.401217   21909 main.go:141] libmachine: (functional-695176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:4c:4f", ip: ""} in network mk-functional-695176: {Iface:virbr1 ExpiryTime:2024-08-12 11:33:19 +0000 UTC Type:0 Mac:52:54:00:5a:4c:4f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:functional-695176 Clientid:01:52:54:00:5a:4c:4f}
I0812 10:36:23.401239   21909 main.go:141] libmachine: (functional-695176) DBG | domain functional-695176 has defined IP address 192.168.39.45 and MAC address 52:54:00:5a:4c:4f in network mk-functional-695176
I0812 10:36:23.401378   21909 main.go:141] libmachine: (functional-695176) Calling .GetSSHPort
I0812 10:36:23.401556   21909 main.go:141] libmachine: (functional-695176) Calling .GetSSHKeyPath
I0812 10:36:23.401820   21909 main.go:141] libmachine: (functional-695176) Calling .GetSSHUsername
I0812 10:36:23.401961   21909 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/functional-695176/id_rsa Username:docker}
I0812 10:36:23.479712   21909 ssh_runner.go:195] Run: sudo crictl images --output json
I0812 10:36:23.520349   21909 main.go:141] libmachine: Making call to close driver server
I0812 10:36:23.520362   21909 main.go:141] libmachine: (functional-695176) Calling .Close
I0812 10:36:23.520644   21909 main.go:141] libmachine: Successfully made call to close driver server
I0812 10:36:23.520664   21909 main.go:141] libmachine: Making call to close connection to plugin binary
I0812 10:36:23.520684   21909 main.go:141] libmachine: Making call to close driver server
I0812 10:36:23.520692   21909 main.go:141] libmachine: (functional-695176) Calling .Close
I0812 10:36:23.520664   21909 main.go:141] libmachine: (functional-695176) DBG | Closing plugin on server side
I0812 10:36:23.521017   21909 main.go:141] libmachine: (functional-695176) DBG | Closing plugin on server side
I0812 10:36:23.521093   21909 main.go:141] libmachine: Successfully made call to close driver server
I0812 10:36:23.521123   21909 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-695176 image ls --format yaml --alsologtostderr:
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests:
- docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c
- docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc
repoTags:
- docker.io/library/nginx:latest
size: "191750286"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-695176
size: "4943877"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
- registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117609954"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
- registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "112198984"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "87165492"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 7f8120cdd7f12194e91318a0af384d4adfd278964b35f1af0b38d40856a76ef6
repoDigests:
- localhost/minikube-local-cache-test@sha256:736519c059f5e9bfe28df2d4f2a28daf100d051c9f8d490255f1bfe232a9dd32
repoTags:
- localhost/minikube-local-cache-test:functional-695176
size: "3330"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "85953945"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "63051080"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-695176 image ls --format yaml --alsologtostderr:
I0812 10:36:22.248365   21723 out.go:291] Setting OutFile to fd 1 ...
I0812 10:36:22.248496   21723 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 10:36:22.248517   21723 out.go:304] Setting ErrFile to fd 2...
I0812 10:36:22.248525   21723 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 10:36:22.248728   21723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
I0812 10:36:22.249313   21723 config.go:182] Loaded profile config "functional-695176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0812 10:36:22.249409   21723 config.go:182] Loaded profile config "functional-695176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0812 10:36:22.249810   21723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0812 10:36:22.249856   21723 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 10:36:22.265780   21723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34253
I0812 10:36:22.266238   21723 main.go:141] libmachine: () Calling .GetVersion
I0812 10:36:22.266726   21723 main.go:141] libmachine: Using API Version  1
I0812 10:36:22.266752   21723 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 10:36:22.267129   21723 main.go:141] libmachine: () Calling .GetMachineName
I0812 10:36:22.267326   21723 main.go:141] libmachine: (functional-695176) Calling .GetState
I0812 10:36:22.269379   21723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0812 10:36:22.269431   21723 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 10:36:22.284775   21723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43283
I0812 10:36:22.285194   21723 main.go:141] libmachine: () Calling .GetVersion
I0812 10:36:22.285685   21723 main.go:141] libmachine: Using API Version  1
I0812 10:36:22.285704   21723 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 10:36:22.286048   21723 main.go:141] libmachine: () Calling .GetMachineName
I0812 10:36:22.286227   21723 main.go:141] libmachine: (functional-695176) Calling .DriverName
I0812 10:36:22.286505   21723 ssh_runner.go:195] Run: systemctl --version
I0812 10:36:22.286533   21723 main.go:141] libmachine: (functional-695176) Calling .GetSSHHostname
I0812 10:36:22.289271   21723 main.go:141] libmachine: (functional-695176) DBG | domain functional-695176 has defined MAC address 52:54:00:5a:4c:4f in network mk-functional-695176
I0812 10:36:22.289684   21723 main.go:141] libmachine: (functional-695176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:4c:4f", ip: ""} in network mk-functional-695176: {Iface:virbr1 ExpiryTime:2024-08-12 11:33:19 +0000 UTC Type:0 Mac:52:54:00:5a:4c:4f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:functional-695176 Clientid:01:52:54:00:5a:4c:4f}
I0812 10:36:22.289723   21723 main.go:141] libmachine: (functional-695176) DBG | domain functional-695176 has defined IP address 192.168.39.45 and MAC address 52:54:00:5a:4c:4f in network mk-functional-695176
I0812 10:36:22.289860   21723 main.go:141] libmachine: (functional-695176) Calling .GetSSHPort
I0812 10:36:22.290022   21723 main.go:141] libmachine: (functional-695176) Calling .GetSSHKeyPath
I0812 10:36:22.290153   21723 main.go:141] libmachine: (functional-695176) Calling .GetSSHUsername
I0812 10:36:22.290268   21723 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/functional-695176/id_rsa Username:docker}
I0812 10:36:22.367684   21723 ssh_runner.go:195] Run: sudo crictl images --output json
I0812 10:36:22.410382   21723 main.go:141] libmachine: Making call to close driver server
I0812 10:36:22.410395   21723 main.go:141] libmachine: (functional-695176) Calling .Close
I0812 10:36:22.410667   21723 main.go:141] libmachine: Successfully made call to close driver server
I0812 10:36:22.410745   21723 main.go:141] libmachine: Making call to close connection to plugin binary
I0812 10:36:22.410768   21723 main.go:141] libmachine: Making call to close driver server
I0812 10:36:22.410778   21723 main.go:141] libmachine: (functional-695176) Calling .Close
I0812 10:36:22.410703   21723 main.go:141] libmachine: (functional-695176) DBG | Closing plugin on server side
I0812 10:36:22.411013   21723 main.go:141] libmachine: Successfully made call to close driver server
I0812 10:36:22.411029   21723 main.go:141] libmachine: Making call to close connection to plugin binary
I0812 10:36:22.411033   21723 main.go:141] libmachine: (functional-695176) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)
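Note on reproducing the listing above: the YAML is rendered by minikube from the node's CRI image store; the log shows it shelling in and running "sudo crictl images --output json". A minimal manual re-run, assuming the same profile is still up (profile name taken from the log):
    out/minikube-linux-amd64 -p functional-695176 image ls --format yaml                    # rendered view, as the test runs it
    out/minikube-linux-amd64 -p functional-695176 ssh -- sudo crictl images --output json   # raw CRI data the view is built from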

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695176 ssh pgrep buildkitd: exit status 1 (187.748662ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 image build -t localhost/my-image:functional-695176 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-695176 image build -t localhost/my-image:functional-695176 testdata/build --alsologtostderr: (2.853594535s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-695176 image build -t localhost/my-image:functional-695176 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 307a06c11bb
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-695176
--> d69d02e70f9
Successfully tagged localhost/my-image:functional-695176
d69d02e70f99c94934510c67a15086e141565113b91bb8631c66f2133dd2bcfc
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-695176 image build -t localhost/my-image:functional-695176 testdata/build --alsologtostderr:
I0812 10:36:22.644569   21800 out.go:291] Setting OutFile to fd 1 ...
I0812 10:36:22.644844   21800 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 10:36:22.644853   21800 out.go:304] Setting ErrFile to fd 2...
I0812 10:36:22.644857   21800 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 10:36:22.645111   21800 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
I0812 10:36:22.645693   21800 config.go:182] Loaded profile config "functional-695176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0812 10:36:22.646212   21800 config.go:182] Loaded profile config "functional-695176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0812 10:36:22.646663   21800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0812 10:36:22.646710   21800 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 10:36:22.663378   21800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39437
I0812 10:36:22.663811   21800 main.go:141] libmachine: () Calling .GetVersion
I0812 10:36:22.664381   21800 main.go:141] libmachine: Using API Version  1
I0812 10:36:22.664406   21800 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 10:36:22.664726   21800 main.go:141] libmachine: () Calling .GetMachineName
I0812 10:36:22.664937   21800 main.go:141] libmachine: (functional-695176) Calling .GetState
I0812 10:36:22.667171   21800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0812 10:36:22.667213   21800 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 10:36:22.681949   21800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40527
I0812 10:36:22.682496   21800 main.go:141] libmachine: () Calling .GetVersion
I0812 10:36:22.683082   21800 main.go:141] libmachine: Using API Version  1
I0812 10:36:22.683115   21800 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 10:36:22.683477   21800 main.go:141] libmachine: () Calling .GetMachineName
I0812 10:36:22.683660   21800 main.go:141] libmachine: (functional-695176) Calling .DriverName
I0812 10:36:22.683879   21800 ssh_runner.go:195] Run: systemctl --version
I0812 10:36:22.683910   21800 main.go:141] libmachine: (functional-695176) Calling .GetSSHHostname
I0812 10:36:22.686917   21800 main.go:141] libmachine: (functional-695176) DBG | domain functional-695176 has defined MAC address 52:54:00:5a:4c:4f in network mk-functional-695176
I0812 10:36:22.687418   21800 main.go:141] libmachine: (functional-695176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:4c:4f", ip: ""} in network mk-functional-695176: {Iface:virbr1 ExpiryTime:2024-08-12 11:33:19 +0000 UTC Type:0 Mac:52:54:00:5a:4c:4f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:functional-695176 Clientid:01:52:54:00:5a:4c:4f}
I0812 10:36:22.687448   21800 main.go:141] libmachine: (functional-695176) DBG | domain functional-695176 has defined IP address 192.168.39.45 and MAC address 52:54:00:5a:4c:4f in network mk-functional-695176
I0812 10:36:22.687466   21800 main.go:141] libmachine: (functional-695176) Calling .GetSSHPort
I0812 10:36:22.687617   21800 main.go:141] libmachine: (functional-695176) Calling .GetSSHKeyPath
I0812 10:36:22.687805   21800 main.go:141] libmachine: (functional-695176) Calling .GetSSHUsername
I0812 10:36:22.687911   21800 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/functional-695176/id_rsa Username:docker}
I0812 10:36:22.784457   21800 build_images.go:161] Building image from path: /tmp/build.2875819532.tar
I0812 10:36:22.784530   21800 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0812 10:36:22.797579   21800 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2875819532.tar
I0812 10:36:22.803834   21800 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2875819532.tar: stat -c "%s %y" /var/lib/minikube/build/build.2875819532.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2875819532.tar': No such file or directory
I0812 10:36:22.803859   21800 ssh_runner.go:362] scp /tmp/build.2875819532.tar --> /var/lib/minikube/build/build.2875819532.tar (3072 bytes)
I0812 10:36:22.832773   21800 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2875819532
I0812 10:36:22.846559   21800 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2875819532 -xf /var/lib/minikube/build/build.2875819532.tar
I0812 10:36:22.858162   21800 crio.go:315] Building image: /var/lib/minikube/build/build.2875819532
I0812 10:36:22.858243   21800 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-695176 /var/lib/minikube/build/build.2875819532 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0812 10:36:25.430909   21800 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-695176 /var/lib/minikube/build/build.2875819532 --cgroup-manager=cgroupfs: (2.572626039s)
I0812 10:36:25.430989   21800 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2875819532
I0812 10:36:25.443255   21800 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2875819532.tar
I0812 10:36:25.452654   21800 build_images.go:217] Built localhost/my-image:functional-695176 from /tmp/build.2875819532.tar
I0812 10:36:25.452696   21800 build_images.go:133] succeeded building to: functional-695176
I0812 10:36:25.452701   21800 build_images.go:134] failed building to: 
I0812 10:36:25.452722   21800 main.go:141] libmachine: Making call to close driver server
I0812 10:36:25.452730   21800 main.go:141] libmachine: (functional-695176) Calling .Close
I0812 10:36:25.453033   21800 main.go:141] libmachine: Successfully made call to close driver server
I0812 10:36:25.453052   21800 main.go:141] libmachine: Making call to close connection to plugin binary
I0812 10:36:25.453067   21800 main.go:141] libmachine: Making call to close driver server
I0812 10:36:25.453069   21800 main.go:141] libmachine: (functional-695176) DBG | Closing plugin on server side
I0812 10:36:25.453076   21800 main.go:141] libmachine: (functional-695176) Calling .Close
I0812 10:36:25.453307   21800 main.go:141] libmachine: Successfully made call to close driver server
I0812 10:36:25.453320   21800 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 image ls
2024/08/12 10:36:25 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.26s)
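For reference, the STEP lines in the build output correspond to a build context of roughly the following shape. This is a sketch reconstructed from the log, not the literal contents of testdata/build (the /tmp path and placeholder content.txt are illustrative); the image build command is the one the test invokes:
    mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    echo placeholder > content.txt    # stand-in for the real content.txt, whose contents are not shown in the log
    out/minikube-linux-amd64 -p functional-695176 image build -t localhost/my-image:functional-695176 .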

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.833920813s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-695176
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.85s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 image load --daemon kicbase/echo-server:functional-695176 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-695176 image load --daemon kicbase/echo-server:functional-695176 --alsologtostderr: (1.151381861s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.38s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (17s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-695176 /tmp/TestFunctionalparallelMountCmdany-port536550888/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723458948866249333" to /tmp/TestFunctionalparallelMountCmdany-port536550888/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723458948866249333" to /tmp/TestFunctionalparallelMountCmdany-port536550888/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723458948866249333" to /tmp/TestFunctionalparallelMountCmdany-port536550888/001/test-1723458948866249333
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695176 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (198.721925ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 12 10:35 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 12 10:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 12 10:35 test-1723458948866249333
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh cat /mount-9p/test-1723458948866249333
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-695176 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [bbe0d340-2037-4526-a54b-a188f92fd4f6] Pending
helpers_test.go:344: "busybox-mount" [bbe0d340-2037-4526-a54b-a188f92fd4f6] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [bbe0d340-2037-4526-a54b-a188f92fd4f6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [bbe0d340-2037-4526-a54b-a188f92fd4f6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 14.016339806s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-695176 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-695176 /tmp/TestFunctionalparallelMountCmdany-port536550888/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (17.00s)
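The 9p mount checks above can be replayed by hand. A rough sketch, assuming an arbitrary host directory in place of the generated temp dir (the guest-side commands are the ones from the log):
    mkdir -p /tmp/mount-src
    out/minikube-linux-amd64 mount -p functional-695176 /tmp/mount-src:/mount-9p &             # serve the host dir over 9p in the background
    out/minikube-linux-amd64 -p functional-695176 ssh "findmnt -T /mount-9p | grep 9p"          # confirm the guest sees a 9p mount
    out/minikube-linux-amd64 -p functional-695176 ssh -- ls -la /mount-9p                       # list the mounted files
    out/minikube-linux-amd64 -p functional-695176 ssh "sudo umount -f /mount-9p"                # clean up, as the test does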

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 image load --daemon kicbase/echo-server:functional-695176 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-695176
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 image load --daemon kicbase/echo-server:functional-695176 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.02s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 image save kicbase/echo-server:functional-695176 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-695176 image save kicbase/echo-server:functional-695176 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (7.579906014s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 image rm kicbase/echo-server:functional-695176 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-695176 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.189247801s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.40s)
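Together with ImageSaveToFile and ImageRemove above, this exercises a save/remove/load round-trip. A condensed sketch with an illustrative tar path (the commands themselves are the ones from the logs):
    out/minikube-linux-amd64 -p functional-695176 image save kicbase/echo-server:functional-695176 /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-695176 image rm kicbase/echo-server:functional-695176
    out/minikube-linux-amd64 -p functional-695176 image load /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-695176 image ls    # the functional-695176 tag should be listed again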

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-695176
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 image save --daemon kicbase/echo-server:functional-695176 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-695176
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-695176 /tmp/TestFunctionalparallelMountCmdspecific-port910861286/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695176 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (280.77245ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-695176 /tmp/TestFunctionalparallelMountCmdspecific-port910861286/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695176 ssh "sudo umount -f /mount-9p": exit status 1 (211.412573ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-695176 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-695176 /tmp/TestFunctionalparallelMountCmdspecific-port910861286/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.93s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-695176 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2627011264/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-695176 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2627011264/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-695176 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2627011264/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695176 ssh "findmnt -T" /mount1: exit status 1 (277.920092ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-695176 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-695176 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2627011264/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-695176 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2627011264/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-695176 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2627011264/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.90s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-695176 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-695176 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-gvxrm" [048af2b4-314a-4682-b515-3539d90ceb76] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-gvxrm" [048af2b4-314a-4682-b515-3539d90ceb76] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004650451s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.25s)
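The hello-node workload probed here is created with stock kubectl. A minimal reproduction, using the two commands from the log plus an illustrative readiness check:
    kubectl --context functional-695176 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-695176 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-695176 get pods -l app=hello-node -w    # watch until the pod reports Running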

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.29s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "264.34927ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "45.526375ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "278.701264ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "50.780847ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 service list
functional_test.go:1459: (dbg) Done: out/minikube-linux-amd64 -p functional-695176 service list: (1.277453769s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 service list -o json
functional_test.go:1489: (dbg) Done: out/minikube-linux-amd64 -p functional-695176 service list -o json: (1.255212472s)
functional_test.go:1494: Took "1.255309781s" to run "out/minikube-linux-amd64 -p functional-695176 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.26s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.45:30607
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-695176 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.45:30607
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.27s)
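The URL printed above is a NodePort endpoint on the VM. A quick smoke test against it; the service command is the one from the log, the curl call is illustrative:
    URL=$(out/minikube-linux-amd64 -p functional-695176 service hello-node --url)
    curl -s "$URL"    # echoserver should answer with the request details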

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-695176
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-695176
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-695176
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (218.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-919901 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0812 10:38:30.975377   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
E0812 10:38:58.660509   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-919901 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m37.404921334s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (218.08s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-919901 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-919901 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-919901 -- rollout status deployment/busybox: (4.057372518s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-919901 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-919901 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-919901 -- exec busybox-fc5497c4f-46rph -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-919901 -- exec busybox-fc5497c4f-pj8gg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-919901 -- exec busybox-fc5497c4f-v6ddx -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-919901 -- exec busybox-fc5497c4f-46rph -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-919901 -- exec busybox-fc5497c4f-pj8gg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-919901 -- exec busybox-fc5497c4f-v6ddx -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-919901 -- exec busybox-fc5497c4f-46rph -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-919901 -- exec busybox-fc5497c4f-pj8gg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-919901 -- exec busybox-fc5497c4f-v6ddx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.19s)
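
Each DNS step above execs nslookup inside one of the busybox pods and only requires a zero exit status. A minimal Go sketch of that check, assuming plain kubectl with a context named after the profile (the log itself goes through the minikube kubectl wrapper, and the pod name below is just the example from the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// dnsCheck runs nslookup for host inside pod and returns an error if resolution fails.
	// Sketch only: assumes kubectl on PATH and a kubeconfig context matching the profile.
	func dnsCheck(context, pod, host string) error {
		out, err := exec.Command("kubectl", "--context", context, "exec", pod, "--",
			"nslookup", host).CombinedOutput()
		if err != nil {
			return fmt.Errorf("nslookup %s in %s failed: %v\n%s", host, pod, err, out)
		}
		return nil
	}

	func main() {
		if err := dnsCheck("ha-919901", "busybox-fc5497c4f-46rph",
			"kubernetes.default.svc.cluster.local"); err != nil {
			panic(err)
		}
		fmt.Println("in-pod DNS resolution OK")
	}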

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-919901 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-919901 -- exec busybox-fc5497c4f-46rph -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-919901 -- exec busybox-fc5497c4f-46rph -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-919901 -- exec busybox-fc5497c4f-pj8gg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-919901 -- exec busybox-fc5497c4f-pj8gg -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-919901 -- exec busybox-fc5497c4f-v6ddx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-919901 -- exec busybox-fc5497c4f-v6ddx -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.25s)
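
The shell pipeline above (nslookup | awk 'NR==5' | cut -d' ' -f3) pulls the host's address out of busybox nslookup output, and the follow-up step pings that address from inside the pod. A hedged Go sketch of the same idea, parsing the nslookup output host-side instead of with awk/cut (the field handling is an approximation; busybox nslookup output varies by version):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostIPFromPod resolves host.minikube.internal inside the pod and returns
	// the last whitespace-separated field of line 5 of the nslookup output,
	// approximating the awk 'NR==5' | cut -d' ' -f3 pipeline in the log.
	func hostIPFromPod(context, pod string) (string, error) {
		out, err := exec.Command("kubectl", "--context", context, "exec", pod, "--",
			"nslookup", "host.minikube.internal").Output()
		if err != nil {
			return "", err
		}
		lines := strings.Split(string(out), "\n")
		if len(lines) < 5 {
			return "", fmt.Errorf("unexpected nslookup output:\n%s", out)
		}
		fields := strings.Fields(lines[4])
		if len(fields) == 0 {
			return "", fmt.Errorf("unexpected nslookup line: %q", lines[4])
		}
		return fields[len(fields)-1], nil
	}

	func main() {
		ip, err := hostIPFromPod("ha-919901", "busybox-fc5497c4f-46rph")
		if err != nil {
			panic(err)
		}
		fmt.Println("host address seen from the pod:", ip) // the log pings 192.168.39.1
	}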

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (53.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-919901 -v=7 --alsologtostderr
E0812 10:40:45.936045   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
E0812 10:40:45.941439   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
E0812 10:40:45.951868   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
E0812 10:40:45.972257   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
E0812 10:40:46.012551   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
E0812 10:40:46.092903   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
E0812 10:40:46.253198   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
E0812 10:40:46.573778   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
E0812 10:40:47.214994   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
E0812 10:40:48.495667   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
E0812 10:40:51.056410   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
E0812 10:40:56.176683   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
E0812 10:41:06.417446   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-919901 -v=7 --alsologtostderr: (52.344131722s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.17s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-919901 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.56s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 cp testdata/cp-test.txt ha-919901:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 ssh -n ha-919901 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 cp ha-919901:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2587644134/001/cp-test_ha-919901.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 ssh -n ha-919901 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 cp ha-919901:/home/docker/cp-test.txt ha-919901-m02:/home/docker/cp-test_ha-919901_ha-919901-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 ssh -n ha-919901 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 ssh -n ha-919901-m02 "sudo cat /home/docker/cp-test_ha-919901_ha-919901-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 cp ha-919901:/home/docker/cp-test.txt ha-919901-m03:/home/docker/cp-test_ha-919901_ha-919901-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 ssh -n ha-919901 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 ssh -n ha-919901-m03 "sudo cat /home/docker/cp-test_ha-919901_ha-919901-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 cp ha-919901:/home/docker/cp-test.txt ha-919901-m04:/home/docker/cp-test_ha-919901_ha-919901-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 ssh -n ha-919901 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 ssh -n ha-919901-m04 "sudo cat /home/docker/cp-test_ha-919901_ha-919901-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 cp testdata/cp-test.txt ha-919901-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 ssh -n ha-919901-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 cp ha-919901-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2587644134/001/cp-test_ha-919901-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 ssh -n ha-919901-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 cp ha-919901-m02:/home/docker/cp-test.txt ha-919901:/home/docker/cp-test_ha-919901-m02_ha-919901.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 ssh -n ha-919901-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 ssh -n ha-919901 "sudo cat /home/docker/cp-test_ha-919901-m02_ha-919901.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 cp ha-919901-m02:/home/docker/cp-test.txt ha-919901-m03:/home/docker/cp-test_ha-919901-m02_ha-919901-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 ssh -n ha-919901-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 ssh -n ha-919901-m03 "sudo cat /home/docker/cp-test_ha-919901-m02_ha-919901-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 cp ha-919901-m02:/home/docker/cp-test.txt ha-919901-m04:/home/docker/cp-test_ha-919901-m02_ha-919901-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 ssh -n ha-919901-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 ssh -n ha-919901-m04 "sudo cat /home/docker/cp-test_ha-919901-m02_ha-919901-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 cp testdata/cp-test.txt ha-919901-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 ssh -n ha-919901-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 cp ha-919901-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2587644134/001/cp-test_ha-919901-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 ssh -n ha-919901-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 cp ha-919901-m03:/home/docker/cp-test.txt ha-919901:/home/docker/cp-test_ha-919901-m03_ha-919901.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 ssh -n ha-919901-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 ssh -n ha-919901 "sudo cat /home/docker/cp-test_ha-919901-m03_ha-919901.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 cp ha-919901-m03:/home/docker/cp-test.txt ha-919901-m02:/home/docker/cp-test_ha-919901-m03_ha-919901-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 ssh -n ha-919901-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 ssh -n ha-919901-m02 "sudo cat /home/docker/cp-test_ha-919901-m03_ha-919901-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 cp ha-919901-m03:/home/docker/cp-test.txt ha-919901-m04:/home/docker/cp-test_ha-919901-m03_ha-919901-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 ssh -n ha-919901-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 ssh -n ha-919901-m04 "sudo cat /home/docker/cp-test_ha-919901-m03_ha-919901-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 cp testdata/cp-test.txt ha-919901-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 ssh -n ha-919901-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 cp ha-919901-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2587644134/001/cp-test_ha-919901-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 ssh -n ha-919901-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 cp ha-919901-m04:/home/docker/cp-test.txt ha-919901:/home/docker/cp-test_ha-919901-m04_ha-919901.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 ssh -n ha-919901-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 ssh -n ha-919901 "sudo cat /home/docker/cp-test_ha-919901-m04_ha-919901.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 cp ha-919901-m04:/home/docker/cp-test.txt ha-919901-m02:/home/docker/cp-test_ha-919901-m04_ha-919901-m02.txt
E0812 10:41:26.898464   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 ssh -n ha-919901-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 ssh -n ha-919901-m02 "sudo cat /home/docker/cp-test_ha-919901-m04_ha-919901-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 cp ha-919901-m04:/home/docker/cp-test.txt ha-919901-m03:/home/docker/cp-test_ha-919901-m04_ha-919901-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 ssh -n ha-919901-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 ssh -n ha-919901-m03 "sudo cat /home/docker/cp-test_ha-919901-m04_ha-919901-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.68s)
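
The CopyFile steps all follow one round-trip pattern: minikube cp pushes testdata/cp-test.txt onto a node, then minikube ssh -n <node> reads it back. A sketch of that pattern in Go, under the assumption that the file is small enough to compare in memory (binary path, profile, node, and file paths copied from the log; this is not the harness's own helper):

	package main

	import (
		"bytes"
		"log"
		"os"
		"os/exec"
	)

	// roundTrip copies a local file onto a node and reads it back over ssh,
	// failing if the contents differ. Sketch of the cp/ssh pattern in the log.
	func roundTrip(bin, profile, node, local, remote string) {
		want, err := os.ReadFile(local)
		if err != nil {
			log.Fatal(err)
		}
		if out, err := exec.Command(bin, "-p", profile, "cp", local, node+":"+remote).CombinedOutput(); err != nil {
			log.Fatalf("cp failed: %v\n%s", err, out)
		}
		got, err := exec.Command(bin, "-p", profile, "ssh", "-n", node, "sudo cat "+remote).Output()
		if err != nil {
			log.Fatal(err)
		}
		if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
			log.Fatalf("content mismatch on %s:%s", node, remote)
		}
	}

	func main() {
		roundTrip("out/minikube-linux-amd64", "ha-919901", "ha-919901-m02",
			"testdata/cp-test.txt", "/home/docker/cp-test.txt")
	}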

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.491945618s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.40s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-919901 node delete m03 -v=7 --alsologtostderr: (16.33799026s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.05s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (354.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-919901 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0812 10:55:45.938447   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
E0812 10:57:08.981475   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
E0812 10:58:30.975961   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-919901 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m53.482176185s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (354.24s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (80.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-919901 --control-plane -v=7 --alsologtostderr
E0812 11:00:45.935981   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-919901 --control-plane -v=7 --alsologtostderr: (1m19.660919648s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-919901 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (80.46s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

                                                
                                    
TestJSONOutput/start/Command (55.08s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-319159 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-319159 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (55.078601345s)
--- PASS: TestJSONOutput/start/Command (55.08s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.72s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-319159 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-319159 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.7s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-319159 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-319159 --output=json --user=testUser: (6.70059082s)
--- PASS: TestJSONOutput/stop/Command (6.70s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-770570 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-770570 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (63.135014ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c7b91795-3690-49b4-a8be-edb1d22cb8d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-770570] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"750a4c38-28e2-424f-b9da-60f4e37690f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19409"}}
	{"specversion":"1.0","id":"5906f769-bd48-4053-88ec-9878cea6a6a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"52632088-a8b4-40e4-b1eb-214f212b87e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig"}}
	{"specversion":"1.0","id":"c0f5e8b2-9c22-4bca-9054-46727e8380ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube"}}
	{"specversion":"1.0","id":"3ce993f1-087b-4bbc-b426-3a3f8d6241ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"7d4df191-ea7e-45db-b948-cd2006b18fc3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"87da0117-8778-46f9-9e49-0f56aea19f68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-770570" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-770570
--- PASS: TestErrorJSONOutput (0.19s)
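
The stdout block above is a stream of one JSON event per line; the last line is the error event that produced exit status 56. A small Go sketch that scans such a stream (fed on stdin) and surfaces the error event; the struct models only the fields visible in the log, not the full event schema:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event models only the JSON fields read below, as seen in the log above.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin) // e.g. pipe in `minikube start --output=json ...`
		for sc.Scan() {
			var e event
			if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
				continue // ignore lines that are not JSON events
			}
			if e.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error %s (exit code %s): %s\n",
					e.Data["name"], e.Data["exitcode"], e.Data["message"])
			}
		}
	}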

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (86.83s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-210693 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-210693 --driver=kvm2  --container-runtime=crio: (40.14135222s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-213000 --driver=kvm2  --container-runtime=crio
E0812 11:03:30.975292   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-213000 --driver=kvm2  --container-runtime=crio: (44.084031001s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-210693
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-213000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-213000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-213000
helpers_test.go:175: Cleaning up "first-210693" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-210693
--- PASS: TestMinikubeProfile (86.83s)
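
The profile checks above rely on `profile list -ojson` being machine readable. A rough Go check along the same lines, assuming only that both profile names appear somewhere in the JSON output (the exact schema is not shown in this log, so it is deliberately not modelled):

	package main

	import (
		"bytes"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Sketch: only asserts that the JSON listing mentions both profiles.
		out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
		if err != nil {
			log.Fatal(err)
		}
		for _, name := range []string{"first-210693", "second-213000"} {
			if !bytes.Contains(out, []byte(name)) {
				log.Fatalf("profile %q missing from `profile list -ojson` output", name)
			}
			fmt.Println("found profile:", name)
		}
	}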

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.22s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-215088 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-215088 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.222823984s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.22s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-215088 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-215088 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
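
The mount verification above boils down to: ssh into the guest, list mounts, and expect a 9p entry. A compact Go version of that check (binary path and profile name taken from the log; this is a sketch, not the test's own helper):

	package main

	import (
		"log"
		"os/exec"
		"strings"
	)

	// has9pMount returns true if the guest's mount table contains a 9p filesystem,
	// which is what the `mount | grep 9p` step above asserts.
	func has9pMount(bin, profile string) bool {
		out, err := exec.Command(bin, "-p", profile, "ssh", "--", "mount").Output()
		if err != nil {
			log.Fatalf("minikube ssh mount failed: %v", err)
		}
		return strings.Contains(string(out), "9p")
	}

	func main() {
		if !has9pMount("out/minikube-linux-amd64", "mount-start-1-215088") {
			log.Fatal("expected a 9p mount inside the guest")
		}
		log.Print("9p mount present")
	}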

                                                
                                    
TestMountStart/serial/StartWithMountSecond (31.04s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-227387 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-227387 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (30.040954467s)
--- PASS: TestMountStart/serial/StartWithMountSecond (31.04s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-227387 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-227387 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.73s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-215088 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.73s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-227387 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-227387 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (2.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-227387
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-227387: (2.278275674s)
--- PASS: TestMountStart/serial/Stop (2.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.95s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-227387
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-227387: (21.948626985s)
--- PASS: TestMountStart/serial/RestartStopped (22.95s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-227387 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-227387 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (122.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-053297 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0812 11:05:45.936538   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
E0812 11:06:34.022588   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-053297 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m1.865788571s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (122.27s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-053297 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-053297 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-053297 -- rollout status deployment/busybox: (4.139004318s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-053297 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-053297 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-053297 -- exec busybox-fc5497c4f-242jl -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-053297 -- exec busybox-fc5497c4f-z9kcl -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-053297 -- exec busybox-fc5497c4f-242jl -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-053297 -- exec busybox-fc5497c4f-z9kcl -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-053297 -- exec busybox-fc5497c4f-242jl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-053297 -- exec busybox-fc5497c4f-z9kcl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.57s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-053297 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-053297 -- exec busybox-fc5497c4f-242jl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-053297 -- exec busybox-fc5497c4f-242jl -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-053297 -- exec busybox-fc5497c4f-z9kcl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-053297 -- exec busybox-fc5497c4f-z9kcl -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.80s)

                                                
                                    
TestMultiNode/serial/AddNode (48.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-053297 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-053297 -v 3 --alsologtostderr: (47.606733712s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.19s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-053297 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 cp testdata/cp-test.txt multinode-053297:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 ssh -n multinode-053297 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 cp multinode-053297:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4188486420/001/cp-test_multinode-053297.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 ssh -n multinode-053297 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 cp multinode-053297:/home/docker/cp-test.txt multinode-053297-m02:/home/docker/cp-test_multinode-053297_multinode-053297-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 ssh -n multinode-053297 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 ssh -n multinode-053297-m02 "sudo cat /home/docker/cp-test_multinode-053297_multinode-053297-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 cp multinode-053297:/home/docker/cp-test.txt multinode-053297-m03:/home/docker/cp-test_multinode-053297_multinode-053297-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 ssh -n multinode-053297 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 ssh -n multinode-053297-m03 "sudo cat /home/docker/cp-test_multinode-053297_multinode-053297-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 cp testdata/cp-test.txt multinode-053297-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 ssh -n multinode-053297-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 cp multinode-053297-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4188486420/001/cp-test_multinode-053297-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 ssh -n multinode-053297-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 cp multinode-053297-m02:/home/docker/cp-test.txt multinode-053297:/home/docker/cp-test_multinode-053297-m02_multinode-053297.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 ssh -n multinode-053297-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 ssh -n multinode-053297 "sudo cat /home/docker/cp-test_multinode-053297-m02_multinode-053297.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 cp multinode-053297-m02:/home/docker/cp-test.txt multinode-053297-m03:/home/docker/cp-test_multinode-053297-m02_multinode-053297-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 ssh -n multinode-053297-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 ssh -n multinode-053297-m03 "sudo cat /home/docker/cp-test_multinode-053297-m02_multinode-053297-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 cp testdata/cp-test.txt multinode-053297-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 ssh -n multinode-053297-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 cp multinode-053297-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4188486420/001/cp-test_multinode-053297-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 ssh -n multinode-053297-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 cp multinode-053297-m03:/home/docker/cp-test.txt multinode-053297:/home/docker/cp-test_multinode-053297-m03_multinode-053297.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 ssh -n multinode-053297-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 ssh -n multinode-053297 "sudo cat /home/docker/cp-test_multinode-053297-m03_multinode-053297.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 cp multinode-053297-m03:/home/docker/cp-test.txt multinode-053297-m02:/home/docker/cp-test_multinode-053297-m03_multinode-053297-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 ssh -n multinode-053297-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 ssh -n multinode-053297-m02 "sudo cat /home/docker/cp-test_multinode-053297-m03_multinode-053297-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.21s)

                                                
                                    
TestMultiNode/serial/StopNode (2.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-053297 node stop m03: (1.391017745s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-053297 status: exit status 7 (418.648005ms)

                                                
                                                
-- stdout --
	multinode-053297
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-053297-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-053297-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-053297 status --alsologtostderr: exit status 7 (414.849075ms)

                                                
                                                
-- stdout --
	multinode-053297
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-053297-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-053297-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 11:08:11.510538   39375 out.go:291] Setting OutFile to fd 1 ...
	I0812 11:08:11.510686   39375 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:08:11.510695   39375 out.go:304] Setting ErrFile to fd 2...
	I0812 11:08:11.510701   39375 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:08:11.510928   39375 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 11:08:11.511135   39375 out.go:298] Setting JSON to false
	I0812 11:08:11.511164   39375 mustload.go:65] Loading cluster: multinode-053297
	I0812 11:08:11.511220   39375 notify.go:220] Checking for updates...
	I0812 11:08:11.511599   39375 config.go:182] Loaded profile config "multinode-053297": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 11:08:11.511616   39375 status.go:255] checking status of multinode-053297 ...
	I0812 11:08:11.511986   39375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:08:11.512064   39375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:08:11.531156   39375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41141
	I0812 11:08:11.531618   39375 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:08:11.532164   39375 main.go:141] libmachine: Using API Version  1
	I0812 11:08:11.532182   39375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:08:11.532660   39375 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:08:11.532912   39375 main.go:141] libmachine: (multinode-053297) Calling .GetState
	I0812 11:08:11.534753   39375 status.go:330] multinode-053297 host status = "Running" (err=<nil>)
	I0812 11:08:11.534775   39375 host.go:66] Checking if "multinode-053297" exists ...
	I0812 11:08:11.535154   39375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:08:11.535211   39375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:08:11.550811   39375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46671
	I0812 11:08:11.551220   39375 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:08:11.551687   39375 main.go:141] libmachine: Using API Version  1
	I0812 11:08:11.551703   39375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:08:11.552047   39375 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:08:11.552316   39375 main.go:141] libmachine: (multinode-053297) Calling .GetIP
	I0812 11:08:11.555180   39375 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:08:11.555632   39375 main.go:141] libmachine: (multinode-053297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:99:5e", ip: ""} in network mk-multinode-053297: {Iface:virbr1 ExpiryTime:2024-08-12 12:05:19 +0000 UTC Type:0 Mac:52:54:00:b2:99:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-053297 Clientid:01:52:54:00:b2:99:5e}
	I0812 11:08:11.555660   39375 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined IP address 192.168.39.95 and MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:08:11.555806   39375 host.go:66] Checking if "multinode-053297" exists ...
	I0812 11:08:11.556198   39375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:08:11.556242   39375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:08:11.572991   39375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40929
	I0812 11:08:11.573381   39375 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:08:11.573970   39375 main.go:141] libmachine: Using API Version  1
	I0812 11:08:11.573997   39375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:08:11.574304   39375 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:08:11.574517   39375 main.go:141] libmachine: (multinode-053297) Calling .DriverName
	I0812 11:08:11.574711   39375 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 11:08:11.574740   39375 main.go:141] libmachine: (multinode-053297) Calling .GetSSHHostname
	I0812 11:08:11.577443   39375 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:08:11.577839   39375 main.go:141] libmachine: (multinode-053297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:99:5e", ip: ""} in network mk-multinode-053297: {Iface:virbr1 ExpiryTime:2024-08-12 12:05:19 +0000 UTC Type:0 Mac:52:54:00:b2:99:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-053297 Clientid:01:52:54:00:b2:99:5e}
	I0812 11:08:11.577870   39375 main.go:141] libmachine: (multinode-053297) DBG | domain multinode-053297 has defined IP address 192.168.39.95 and MAC address 52:54:00:b2:99:5e in network mk-multinode-053297
	I0812 11:08:11.578035   39375 main.go:141] libmachine: (multinode-053297) Calling .GetSSHPort
	I0812 11:08:11.578217   39375 main.go:141] libmachine: (multinode-053297) Calling .GetSSHKeyPath
	I0812 11:08:11.578353   39375 main.go:141] libmachine: (multinode-053297) Calling .GetSSHUsername
	I0812 11:08:11.578490   39375 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/multinode-053297/id_rsa Username:docker}
	I0812 11:08:11.660307   39375 ssh_runner.go:195] Run: systemctl --version
	I0812 11:08:11.666257   39375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:08:11.681108   39375 kubeconfig.go:125] found "multinode-053297" server: "https://192.168.39.95:8443"
	I0812 11:08:11.681136   39375 api_server.go:166] Checking apiserver status ...
	I0812 11:08:11.681182   39375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:08:11.694366   39375 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1153/cgroup
	W0812 11:08:11.703773   39375 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1153/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 11:08:11.703841   39375 ssh_runner.go:195] Run: ls
	I0812 11:08:11.708071   39375 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8443/healthz ...
	I0812 11:08:11.712186   39375 api_server.go:279] https://192.168.39.95:8443/healthz returned 200:
	ok
	I0812 11:08:11.712210   39375 status.go:422] multinode-053297 apiserver status = Running (err=<nil>)
	I0812 11:08:11.712219   39375 status.go:257] multinode-053297 status: &{Name:multinode-053297 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 11:08:11.712234   39375 status.go:255] checking status of multinode-053297-m02 ...
	I0812 11:08:11.712517   39375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:08:11.712549   39375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:08:11.727810   39375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37519
	I0812 11:08:11.728230   39375 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:08:11.728634   39375 main.go:141] libmachine: Using API Version  1
	I0812 11:08:11.728653   39375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:08:11.728990   39375 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:08:11.729227   39375 main.go:141] libmachine: (multinode-053297-m02) Calling .GetState
	I0812 11:08:11.731032   39375 status.go:330] multinode-053297-m02 host status = "Running" (err=<nil>)
	I0812 11:08:11.731051   39375 host.go:66] Checking if "multinode-053297-m02" exists ...
	I0812 11:08:11.731361   39375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:08:11.731399   39375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:08:11.747278   39375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45405
	I0812 11:08:11.747737   39375 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:08:11.748224   39375 main.go:141] libmachine: Using API Version  1
	I0812 11:08:11.748244   39375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:08:11.748626   39375 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:08:11.748836   39375 main.go:141] libmachine: (multinode-053297-m02) Calling .GetIP
	I0812 11:08:11.752099   39375 main.go:141] libmachine: (multinode-053297-m02) DBG | domain multinode-053297-m02 has defined MAC address 52:54:00:fe:c6:e9 in network mk-multinode-053297
	I0812 11:08:11.752657   39375 main.go:141] libmachine: (multinode-053297-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:c6:e9", ip: ""} in network mk-multinode-053297: {Iface:virbr1 ExpiryTime:2024-08-12 12:06:30 +0000 UTC Type:0 Mac:52:54:00:fe:c6:e9 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-053297-m02 Clientid:01:52:54:00:fe:c6:e9}
	I0812 11:08:11.752699   39375 main.go:141] libmachine: (multinode-053297-m02) DBG | domain multinode-053297-m02 has defined IP address 192.168.39.9 and MAC address 52:54:00:fe:c6:e9 in network mk-multinode-053297
	I0812 11:08:11.752844   39375 host.go:66] Checking if "multinode-053297-m02" exists ...
	I0812 11:08:11.753201   39375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:08:11.753247   39375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:08:11.769427   39375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43001
	I0812 11:08:11.769835   39375 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:08:11.770383   39375 main.go:141] libmachine: Using API Version  1
	I0812 11:08:11.770412   39375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:08:11.770731   39375 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:08:11.770970   39375 main.go:141] libmachine: (multinode-053297-m02) Calling .DriverName
	I0812 11:08:11.771169   39375 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 11:08:11.771194   39375 main.go:141] libmachine: (multinode-053297-m02) Calling .GetSSHHostname
	I0812 11:08:11.773791   39375 main.go:141] libmachine: (multinode-053297-m02) DBG | domain multinode-053297-m02 has defined MAC address 52:54:00:fe:c6:e9 in network mk-multinode-053297
	I0812 11:08:11.774281   39375 main.go:141] libmachine: (multinode-053297-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:c6:e9", ip: ""} in network mk-multinode-053297: {Iface:virbr1 ExpiryTime:2024-08-12 12:06:30 +0000 UTC Type:0 Mac:52:54:00:fe:c6:e9 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-053297-m02 Clientid:01:52:54:00:fe:c6:e9}
	I0812 11:08:11.774309   39375 main.go:141] libmachine: (multinode-053297-m02) DBG | domain multinode-053297-m02 has defined IP address 192.168.39.9 and MAC address 52:54:00:fe:c6:e9 in network mk-multinode-053297
	I0812 11:08:11.774456   39375 main.go:141] libmachine: (multinode-053297-m02) Calling .GetSSHPort
	I0812 11:08:11.774638   39375 main.go:141] libmachine: (multinode-053297-m02) Calling .GetSSHKeyPath
	I0812 11:08:11.774815   39375 main.go:141] libmachine: (multinode-053297-m02) Calling .GetSSHUsername
	I0812 11:08:11.774946   39375 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3774/.minikube/machines/multinode-053297-m02/id_rsa Username:docker}
	I0812 11:08:11.851874   39375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:08:11.865610   39375 status.go:257] multinode-053297-m02 status: &{Name:multinode-053297-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0812 11:08:11.865660   39375 status.go:255] checking status of multinode-053297-m03 ...
	I0812 11:08:11.866098   39375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:08:11.866151   39375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:08:11.881546   39375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34063
	I0812 11:08:11.882019   39375 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:08:11.882506   39375 main.go:141] libmachine: Using API Version  1
	I0812 11:08:11.882531   39375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:08:11.882854   39375 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:08:11.883048   39375 main.go:141] libmachine: (multinode-053297-m03) Calling .GetState
	I0812 11:08:11.884474   39375 status.go:330] multinode-053297-m03 host status = "Stopped" (err=<nil>)
	I0812 11:08:11.884489   39375 status.go:343] host is not running, skipping remaining checks
	I0812 11:08:11.884497   39375 status.go:257] multinode-053297-m03 status: &{Name:multinode-053297-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)
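The stderr trace above is the entire status flow: resolve each machine through the kvm2 plugin, SSH in, ask systemd whether the kubelet is active, then probe the apiserver at https://<node-ip>:8443/healthz and accept a 200 response whose body is "ok". A minimal Go sketch of that final probe is below; it reuses the endpoint from the log, and skipping TLS verification is an assumption made only to keep the snippet self-contained, not a statement about how minikube itself connects.

    // healthz_sketch.go: an illustrative probe, not minikube's implementation.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // apiserverHealthy mirrors the check in the log: GET /healthz and treat a
    // 200 status with body "ok" as healthy.
    func apiserverHealthy(url string) (bool, error) {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Assumption for the sketch: skip certificate verification so it
            // runs without the cluster CA from the kubeconfig.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return false, err
        }
        return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }

    func main() {
        healthy, err := apiserverHealthy("https://192.168.39.95:8443/healthz")
        fmt.Println(healthy, err)
    }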

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 node start m03 -v=7 --alsologtostderr
E0812 11:08:30.975361   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-053297 node start m03 -v=7 --alsologtostderr: (38.537692519s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.16s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-053297 node delete m03: (1.80954023s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.34s)
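The readiness assertion in this test (and again in RestartMultiNode below) is nothing more than the go-template quoted in the kubectl command, which walks every node's conditions and prints the status of the one whose type is "Ready". The sketch below executes that exact template with Go's text/template against a hand-built stand-in for the `kubectl get nodes -o json` payload; the node data is illustrative, not taken from this run.

    // ready_template.go: runs the test's go-template locally against fake data.
    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        // Identical to the template passed to kubectl above.
        const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

        // Hypothetical two-node payload in the same shape as `kubectl get nodes -o json`.
        nodes := map[string]any{
            "items": []map[string]any{
                {"status": map[string]any{"conditions": []map[string]any{
                    {"type": "MemoryPressure", "status": "False"},
                    {"type": "Ready", "status": "True"},
                }}},
                {"status": map[string]any{"conditions": []map[string]any{
                    {"type": "Ready", "status": "True"},
                }}},
            },
        }

        t := template.Must(template.New("ready").Parse(tmpl))
        // Prints " True" once per node, which is what the test checks for.
        if err := t.Execute(os.Stdout, nodes); err != nil {
            panic(err)
        }
    }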

                                                
                                    
TestMultiNode/serial/RestartMultiNode (181.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-053297 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0812 11:18:30.976075   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-053297 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m1.074759657s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053297 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (181.61s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (46.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-053297
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-053297-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-053297-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (61.171955ms)

                                                
                                                
-- stdout --
	* [multinode-053297-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-053297-m02' is duplicated with machine name 'multinode-053297-m02' in profile 'multinode-053297'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-053297-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-053297-m03 --driver=kvm2  --container-runtime=crio: (45.402276584s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-053297
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-053297: exit status 80 (205.890505ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-053297 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-053297-m03 already exists in multinode-053297-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-053297-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (46.49s)
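Both failure branches above are recognised purely by exit code: 14 (MK_USAGE) for the duplicated profile name and 80 (GUEST_NODE_ADD) for the node that already exists. When the CLI is driven from Go, the code comes back through exec.ExitError, roughly as in the sketch below; the binary path and arguments simply mirror the log and carry no other significance.

    // exit_codes.go: a generic runner sketch, not code from the test suite.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // run executes a command and returns its exit code plus combined output.
    func run(args ...string) (int, string) {
        out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            return exitErr.ExitCode(), string(out) // e.g. 14 or 80 in the cases above
        }
        if err != nil {
            return -1, err.Error() // command did not start at all
        }
        return 0, string(out)
    }

    func main() {
        code, out := run("out/minikube-linux-amd64", "start", "-p", "multinode-053297-m02",
            "--driver=kvm2", "--container-runtime=crio")
        fmt.Printf("exit=%d\n%s", code, out)
    }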

                                                
                                    
TestScheduledStopUnix (115.01s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-232989 --memory=2048 --driver=kvm2  --container-runtime=crio
E0812 11:25:45.938150   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-232989 --memory=2048 --driver=kvm2  --container-runtime=crio: (43.429051122s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-232989 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-232989 -n scheduled-stop-232989
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-232989 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-232989 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-232989 -n scheduled-stop-232989
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-232989
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-232989 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-232989
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-232989: exit status 7 (62.699514ms)

                                                
                                                
-- stdout --
	scheduled-stop-232989
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-232989 -n scheduled-stop-232989
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-232989 -n scheduled-stop-232989: exit status 7 (64.623046ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-232989" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-232989
--- PASS: TestScheduledStopUnix (115.01s)
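Once the 15s schedule fires, the verification is a plain poll: keep running `status --format={{.Host}}` until the profile prints Stopped (at which point the command also exits 7). A rough equivalent of that wait loop is sketched below; the profile name comes from this run, while the deadline and poll interval are arbitrary choices for the example.

    // wait_stopped.go: a polling sketch, not part of scheduled_stop_test.go.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForStopped polls the profile's host state until it reports Stopped.
    func waitForStopped(profile string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // `status` exits non-zero (7) once the host is stopped, so the error
            // is ignored and only the printed state is inspected.
            out, _ := exec.Command("out/minikube-linux-amd64", "status",
                "--format={{.Host}}", "-p", profile).Output()
            if strings.TrimSpace(string(out)) == "Stopped" {
                return nil
            }
            time.Sleep(5 * time.Second)
        }
        return fmt.Errorf("%s did not reach Stopped within %s", profile, timeout)
    }

    func main() {
        fmt.Println(waitForStopped("scheduled-stop-232989", 2*time.Minute))
    }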

                                                
                                    
TestRunningBinaryUpgrade (212.88s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.760012347 start -p running-upgrade-530158 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0812 11:28:30.975547   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.760012347 start -p running-upgrade-530158 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m0.415687513s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-530158 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-530158 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m28.984639882s)
helpers_test.go:175: Cleaning up "running-upgrade-530158" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-530158
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-530158: (1.196329001s)
--- PASS: TestRunningBinaryUpgrade (212.88s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-444300 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-444300 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (76.738564ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-444300] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (89.72s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-444300 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-444300 --driver=kvm2  --container-runtime=crio: (1m29.473110269s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-444300 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (89.72s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (23.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-444300 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-444300 --no-kubernetes --driver=kvm2  --container-runtime=crio: (21.860524911s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-444300 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-444300 status -o json: exit status 2 (254.929121ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-444300","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-444300
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-444300: (1.183912847s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (23.30s)
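What the `status -o json` step actually asserts is that the host keeps running while the kubelet stays stopped once Kubernetes is disabled. Decoding the JSON document shown above only needs a small struct; the type below mirrors the visible fields and is not minikube's internal definition.

    // status_json.go: decodes the profile status document printed above.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    // profileStatus covers only the fields visible in the log output.
    type profileStatus struct {
        Name       string
        Host       string
        Kubelet    string
        APIServer  string
        Kubeconfig string
        Worker     bool
    }

    func main() {
        raw := `{"Name":"NoKubernetes-444300","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
        var st profileStatus
        if err := json.Unmarshal([]byte(raw), &st); err != nil {
            panic(err)
        }
        // The NoKubernetes tests accept exactly this combination.
        fmt.Println(st.Host == "Running" && st.Kubelet == "Stopped")
    }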

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.27s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.27s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (113.99s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.985584355 start -p stopped-upgrade-453361 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.985584355 start -p stopped-upgrade-453361 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (52.031181499s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.985584355 -p stopped-upgrade-453361 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.985584355 -p stopped-upgrade-453361 stop: (2.138734311s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-453361 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-453361 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (59.822639911s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (113.99s)

                                                
                                    
TestNoKubernetes/serial/Start (36.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-444300 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-444300 --no-kubernetes --driver=kvm2  --container-runtime=crio: (36.871023314s)
--- PASS: TestNoKubernetes/serial/Start (36.87s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-444300 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-444300 "sudo systemctl is-active --quiet service kubelet": exit status 1 (203.120425ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
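The "kubelet is not running" verdict rests entirely on systemd semantics: `systemctl is-active --quiet` exits 0 for an active unit and non-zero otherwise (status 3 on the node, surfaced as exit status 1 by `minikube ssh`). Run locally against a single unit name, the same check reduces to the sketch below; it deliberately uses the plain one-unit form rather than the exact argument list from the log.

    // unit_active.go: a local stand-in for the check the test runs over ssh.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // unitActive reports whether a systemd unit is active on this machine.
    func unitActive(unit string) bool {
        err := exec.Command("systemctl", "is-active", "--quiet", unit).Run()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            return false // non-zero exit: unit inactive, failed, or unknown
        }
        return err == nil // any other error means systemctl itself could not run
    }

    func main() {
        fmt.Println("kubelet active:", unitActive("kubelet"))
    }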

                                                
                                    
TestNoKubernetes/serial/ProfileList (2.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.212544848s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.150954869s)
--- PASS: TestNoKubernetes/serial/ProfileList (2.36s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-444300
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-444300: (1.41051474s)
--- PASS: TestNoKubernetes/serial/Stop (1.41s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (23.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-444300 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-444300 --driver=kvm2  --container-runtime=crio: (23.849179146s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (23.85s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-444300 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-444300 "sudo systemctl is-active --quiet service kubelet": exit status 1 (201.512658ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestNetworkPlugins/group/false (3.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-824402 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-824402 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (104.724924ms)

                                                
                                                
-- stdout --
	* [false-824402] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 11:30:38.675820   50520 out.go:291] Setting OutFile to fd 1 ...
	I0812 11:30:38.675986   50520 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:30:38.675998   50520 out.go:304] Setting ErrFile to fd 2...
	I0812 11:30:38.676005   50520 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:30:38.676338   50520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3774/.minikube/bin
	I0812 11:30:38.677139   50520 out.go:298] Setting JSON to false
	I0812 11:30:38.678447   50520 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4380,"bootTime":1723457859,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 11:30:38.678536   50520 start.go:139] virtualization: kvm guest
	I0812 11:30:38.680896   50520 out.go:177] * [false-824402] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 11:30:38.682411   50520 notify.go:220] Checking for updates...
	I0812 11:30:38.682422   50520 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 11:30:38.683946   50520 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 11:30:38.685284   50520 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19409-3774/kubeconfig
	I0812 11:30:38.686678   50520 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3774/.minikube
	I0812 11:30:38.688505   50520 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 11:30:38.690137   50520 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 11:30:38.691884   50520 config.go:182] Loaded profile config "force-systemd-env-705953": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 11:30:38.691990   50520 config.go:182] Loaded profile config "kubernetes-upgrade-535697": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0812 11:30:38.692121   50520 config.go:182] Loaded profile config "stopped-upgrade-453361": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0812 11:30:38.692225   50520 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 11:30:38.729779   50520 out.go:177] * Using the kvm2 driver based on user configuration
	I0812 11:30:38.731304   50520 start.go:297] selected driver: kvm2
	I0812 11:30:38.731350   50520 start.go:901] validating driver "kvm2" against <nil>
	I0812 11:30:38.731370   50520 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 11:30:38.733528   50520 out.go:177] 
	W0812 11:30:38.734826   50520 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0812 11:30:38.736235   50520 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-824402 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-824402

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-824402

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-824402

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-824402

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-824402

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-824402

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-824402

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-824402

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-824402

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-824402

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-824402

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-824402" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-824402" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-824402" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-824402" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-824402" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-824402" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-824402" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-824402" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-824402" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-824402" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-824402" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 12 Aug 2024 11:30:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.83.89:8443
  name: stopped-upgrade-453361
contexts:
- context:
    cluster: stopped-upgrade-453361
    extensions:
    - extension:
        last-update: Mon, 12 Aug 2024 11:30:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: stopped-upgrade-453361
  name: stopped-upgrade-453361
current-context: stopped-upgrade-453361
kind: Config
preferences: {}
users:
- name: stopped-upgrade-453361
  user:
    client-certificate: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/stopped-upgrade-453361/client.crt
    client-key: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/stopped-upgrade-453361/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-824402

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-824402"

                                                
                                                
----------------------- debugLogs end: false-824402 [took: 2.88316201s] --------------------------------
helpers_test.go:175: Cleaning up "false-824402" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-824402
--- PASS: TestNetworkPlugins/group/false (3.14s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-453361
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

                                                
                                    
TestPause/serial/Start (120.71s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-693259 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-693259 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m0.714889246s)
--- PASS: TestPause/serial/Start (120.71s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (40.79s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-693259 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-693259 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.760813487s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (40.79s)

                                                
                                    
TestPause/serial/Pause (0.67s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-693259 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.67s)

                                                
                                    
TestPause/serial/VerifyStatus (0.23s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-693259 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-693259 --output=json --layout=cluster: exit status 2 (231.31885ms)

                                                
                                                
-- stdout --
	{"Name":"pause-693259","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-693259","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.23s)
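For readers skimming this block: the --layout=cluster output above is plain JSON, so the paused state can be inspected programmatically. The following Go sketch decodes only the fields visible in the captured stdout (Name, StatusName, StatusCode, Nodes, Components); the struct shapes are an assumption based on this log, not minikube's exported types.

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Shapes are guessed from the stdout above, not taken from minikube's source.
type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusName string `json:"StatusName"`
	StatusCode int    `json:"StatusCode"`
	Nodes      []node `json:"Nodes"`
}

func main() {
	raw := []byte(`{"Name":"pause-693259","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-693259","StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`)
	var st clusterStatus
	if err := json.Unmarshal(raw, &st); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("cluster %s: %s\n", st.Name, st.StatusName)
	for _, n := range st.Nodes {
		for name, c := range n.Components {
			fmt.Printf("  %s/%s: %s (%d)\n", n.Name, name, c.StatusName, c.StatusCode)
		}
	}
}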

                                                
                                    
TestPause/serial/Unpause (0.63s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-693259 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.63s)

                                                
                                    
TestPause/serial/PauseAgain (0.78s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-693259 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.78s)

                                                
                                    
TestPause/serial/DeletePaused (1.02s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-693259 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-693259 --alsologtostderr -v=5: (1.019312602s)
--- PASS: TestPause/serial/DeletePaused (1.02s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.46s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.46s)
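VerifyDeletedResources only needs to confirm that the deleted profile no longer shows up in `minikube profile list --output json`. A rough sketch of that check follows; the valid/invalid grouping and the Name field are assumptions inferred from the profile list output, not a documented schema, and the binary path and profile name are simply the ones from this run.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type profile struct {
	Name string `json:"Name"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	// Assumed shape: {"valid": [...], "invalid": [...]}.
	var groups map[string][]profile
	if err := json.Unmarshal(out, &groups); err != nil {
		log.Fatal(err)
	}
	for group, profiles := range groups {
		for _, p := range profiles {
			if p.Name == "pause-693259" {
				log.Fatalf("profile still present in %q group after delete", group)
			}
		}
	}
	fmt.Println("pause-693259 fully removed")
}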

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (58.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-093615 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-093615 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (58.110110916s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (58.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (108.81s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-993542 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-993542 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0: (1m48.81370329s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (108.81s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-093615 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bba94e54-0faf-46c9-90fd-ad6b366a3b28] Pending
helpers_test.go:344: "busybox" [bba94e54-0faf-46c9-90fd-ad6b366a3b28] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bba94e54-0faf-46c9-90fd-ad6b366a3b28] Running
E0812 11:35:45.935522   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004476404s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-093615 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.29s)
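The DeployApp step above waits for the busybox pod through the suite's own polling helpers (helpers_test.go:344). Outside the harness, roughly the same readiness gate can be expressed with kubectl wait; this is a sketch of the idea, not the test's actual mechanism, and it assumes kubectl is on PATH and the embed-certs-093615 context from this run.

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Wait up to the same 8m budget used by the test for the busybox pod to be Ready.
	cmd := exec.Command("kubectl", "--context", "embed-certs-093615",
		"wait", "--for=condition=Ready", "pod",
		"-l", "integration-test=busybox", "--timeout=8m0s")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("busybox pod never became Ready: %v", err)
	}
}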

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-093615 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-093615 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-993542 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e96b1ad8-3dfc-493c-b2bc-5aaea56672a7] Pending
helpers_test.go:344: "busybox" [e96b1ad8-3dfc-493c-b2bc-5aaea56672a7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e96b1ad8-3dfc-493c-b2bc-5aaea56672a7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004307299s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-993542 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-993542 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-993542 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (679.85s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-093615 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0812 11:38:30.976326   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-093615 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (11m19.604730483s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-093615 -n embed-certs-093615
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (679.85s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (5.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-835962 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-835962 --alsologtostderr -v=3: (5.300109808s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (5.30s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-835962 -n old-k8s-version-835962
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-835962 -n old-k8s-version-835962: exit status 7 (64.433147ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-835962 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
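The "exit status 7 (may be ok)" lines above are the expected path: `minikube status` encodes state in its exit code, so a stopped host comes back as a non-zero exit together with "Stopped" on stdout. A small sketch of tolerating that follows; the mapping of exit code 7 to a stopped host is inferred from this log rather than from documented constants.

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-835962",
		"-n", "old-k8s-version-835962").Output()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("host status: %s\n", out)
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// Matches the log above: stdout is "Stopped" and the test simply continues.
		fmt.Printf("host stopped (exit 7, may be ok): %s\n", out)
	default:
		log.Fatalf("status failed: %v", err)
	}
}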

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (304.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-581883 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-581883 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (5m4.956963501s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (304.96s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (603.61s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-993542 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0
E0812 11:39:54.024248   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
E0812 11:40:45.936129   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
E0812 11:43:30.975780   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-993542 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0: (10m3.358760363s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-993542 -n no-preload-993542
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (603.61s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-581883 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4930c51e-a227-4742-b74a-669e9bea4e75] Pending
helpers_test.go:344: "busybox" [4930c51e-a227-4742-b74a-669e9bea4e75] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4930c51e-a227-4742-b74a-669e9bea4e75] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004003131s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-581883 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-581883 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-581883 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.064856872s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-581883 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (622.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-581883 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0812 11:47:08.983814   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
E0812 11:48:30.975624   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-581883 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (10m22.065610176s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-581883 -n default-k8s-diff-port-581883
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (622.31s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (48.64s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-567702 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-567702 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0: (48.63655482s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.64s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-567702 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-567702 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.238141993s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.6s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-567702 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-567702 --alsologtostderr -v=3: (10.599468782s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.60s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-567702 -n newest-cni-567702
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-567702 -n newest-cni-567702: exit status 7 (64.133495ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-567702 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (57.75s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-567702 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0
E0812 12:03:30.975593   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
E0812 12:03:48.984968   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-567702 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0: (57.407460949s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-567702 -n newest-cni-567702
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (57.75s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-567702 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (4.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-567702 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-567702 --alsologtostderr -v=1: (1.856945346s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-567702 -n newest-cni-567702
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-567702 -n newest-cni-567702: exit status 2 (325.415134ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-567702 -n newest-cni-567702
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-567702 -n newest-cni-567702: exit status 2 (285.048085ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-567702 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-567702 -n newest-cni-567702
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-567702 -n newest-cni-567702
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.10s)
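The Pause block above is the whole round trip: pause, confirm the apiserver reports Paused and the kubelet Stopped via the status templates, unpause, and query the same templates again. Below is a condensed sketch of that sequence using only the commands and --format templates visible in this log; the post-unpause values are not shown above, so the sketch just prints whatever comes back.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// componentState queries one status template; non-zero exits are expected while
// paused ("may be ok"), so only stdout is reported here.
func componentState(profile, tmpl string) string {
	out, _ := exec.Command("out/minikube-linux-amd64", "status",
		"--format", tmpl, "-p", profile, "-n", profile).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	const profile = "newest-cni-567702"
	if err := exec.Command("out/minikube-linux-amd64", "pause", "-p", profile).Run(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("apiserver:", componentState(profile, "{{.APIServer}}")) // "Paused" in the run above
	fmt.Println("kubelet:", componentState(profile, "{{.Kubelet}}"))     // "Stopped" in the run above

	if err := exec.Command("out/minikube-linux-amd64", "unpause", "-p", profile).Run(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("apiserver:", componentState(profile, "{{.APIServer}}"))
	fmt.Println("kubelet:", componentState(profile, "{{.Kubelet}}"))
}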

                                                
                                    
TestNetworkPlugins/group/auto/Start (101.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-824402 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-824402 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m41.557095072s)
--- PASS: TestNetworkPlugins/group/auto/Start (101.56s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (87.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-824402 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-824402 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m27.330620145s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (87.33s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (108.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-824402 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0812 12:05:45.936356   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/functional-695176/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-824402 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m48.36860154s)
--- PASS: TestNetworkPlugins/group/calico/Start (108.37s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-824402 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-824402 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-kpqrs" [f58a9708-d708-4f6b-9f88-2717c7cf121b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-kpqrs" [f58a9708-d708-4f6b-9f88-2717c7cf121b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004748689s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-7lpc5" [f03c2b81-4edb-478a-be01-7e8eddc22ec4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004425208s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-824402 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-824402 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-r74cz" [58a26a6f-fa20-477f-aec2-5b542a351973] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-r74cz" [58a26a6f-fa20-477f-aec2-5b542a351973] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004769149s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-824402 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-824402 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-824402 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
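The DNS/Localhost/HairPin trio for the auto profile above probes the netcat deployment three ways: cluster DNS (nslookup kubernetes.default), the pod's own loopback (nc against localhost:8080), and the pod reaching itself through its own "netcat" service, which generally only succeeds when the network plugin handles hairpin traffic. A sketch of the two nc probes, reusing the exact commands from the log and assuming kubectl plus the auto-824402 context:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// probe runs the same nc check the test uses, inside the netcat deployment.
func probe(context, target string) error {
	cmd := fmt.Sprintf("nc -w 5 -i 5 -z %s 8080", target)
	return exec.Command("kubectl", "--context", context,
		"exec", "deployment/netcat", "--", "/bin/sh", "-c", cmd).Run()
}

func main() {
	for _, target := range []string{"localhost", "netcat"} {
		if err := probe("auto-824402", target); err != nil {
			log.Fatalf("probe against %s failed: %v", target, err)
		}
		fmt.Println("reachable via", target)
	}
}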

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-824402 exec deployment/netcat -- nslookup kubernetes.default
E0812 12:06:29.934464   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-824402 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-824402 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (81.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-824402 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0812 12:06:44.538695   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/no-preload-993542/client.crt: no such file or directory
E0812 12:06:44.544008   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/no-preload-993542/client.crt: no such file or directory
E0812 12:06:44.555097   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/no-preload-993542/client.crt: no such file or directory
E0812 12:06:44.575987   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/no-preload-993542/client.crt: no such file or directory
E0812 12:06:44.616366   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/no-preload-993542/client.crt: no such file or directory
E0812 12:06:44.696851   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/no-preload-993542/client.crt: no such file or directory
E0812 12:06:44.857232   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/no-preload-993542/client.crt: no such file or directory
E0812 12:06:45.177416   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/no-preload-993542/client.crt: no such file or directory
E0812 12:06:45.295708   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/client.crt: no such file or directory
E0812 12:06:45.818427   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/no-preload-993542/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-824402 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m21.344618817s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (81.34s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (81.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-824402 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-824402 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m21.100913864s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (81.10s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-p7hmg" [870fcd1f-06cc-49aa-ba26-b1a7406ef9df] Running
E0812 12:06:49.659920   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/no-preload-993542/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005107046s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-824402 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-824402 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-7n9f4" [4dfd17eb-31df-47ae-ada3-2f5af8fc5ea5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0812 12:06:54.780843   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/no-preload-993542/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-7n9f4" [4dfd17eb-31df-47ae-ada3-2f5af8fc5ea5] Running
E0812 12:07:05.021148   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/no-preload-993542/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003821597s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.22s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-824402 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-824402 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-824402 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (93.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-824402 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0812 12:07:25.502011   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/no-preload-993542/client.crt: no such file or directory
E0812 12:07:46.737533   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-824402 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m33.265346862s)
--- PASS: TestNetworkPlugins/group/flannel/Start (93.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-824402 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-824402 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-hbldd" [f4a2c4f8-4891-4e3b-8a96-f3089d8cb9e2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0812 12:08:06.462689   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/no-preload-993542/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-hbldd" [f4a2c4f8-4891-4e3b-8a96-f3089d8cb9e2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.004228969s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.32s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-824402 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-824402 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-88vkf" [77f0c5c6-0a42-46c5-902d-471e2fb22bf8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-88vkf" [77f0c5c6-0a42-46c5-902d-471e2fb22bf8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004417412s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-824402 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-824402 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-824402 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-824402 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-824402 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-824402 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (94.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-824402 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0812 12:08:30.975271   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/addons-883541/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-824402 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m34.928409231s)
--- PASS: TestNetworkPlugins/group/bridge/Start (94.93s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-g5lzn" [88b28d95-c0f5-4c17-b41f-99d5307e3e65] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004531099s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-824402 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-824402 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-phxlw" [4c0279ec-394e-482e-a0ee-f0caa1e611cb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-phxlw" [4c0279ec-394e-482e-a0ee-f0caa1e611cb] Running
E0812 12:09:08.658407   10927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/old-k8s-version-835962/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003578647s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-824402 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-824402 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-824402 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-824402 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-824402 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-8zlwl" [70410225-c3ce-4b83-baad-cd1f6c12f505] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-8zlwl" [70410225-c3ce-4b83-baad-cd1f6c12f505] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.00360289s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-824402 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-824402 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-824402 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)
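Taken together, the DNS, Localhost, and HairPin subtests above all drive the same netcat test deployment in each profile; as a sketch, the three probes for the bridge profile reduce to the commands already recorded in the log:

    kubectl --context bridge-824402 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context bridge-824402 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context bridge-824402 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

The last command dials the pod's own Service name ("netcat") from inside the pod, so it only passes when hairpin NAT is working on the node; the first two verify cluster DNS resolution and plain localhost connectivity, respectively.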

                                                
                                    

Test skip (40/326)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-rc.0/cached-images 0
24 TestDownloadOnly/v1.31.0-rc.0/binaries 0
25 TestDownloadOnly/v1.31.0-rc.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0
47 TestAddons/parallel/Olm 0
57 TestDockerFlags 0
60 TestDockerEnvContainerd 0
62 TestHyperKitDriverInstallOrUpdate 0
63 TestHyperkitDriverSkipUpgrade 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
140 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
163 TestGvisorAddon 0
185 TestImageBuild 0
212 TestKicCustomNetwork 0
213 TestKicExistingNetwork 0
214 TestKicCustomSubnet 0
215 TestKicStaticIP 0
247 TestChangeNoneUser 0
250 TestScheduledStopWindows 0
252 TestSkaffold 0
254 TestInsufficientStorage 0
258 TestMissingContainerUpgrade 0
276 TestStartStop/group/disable-driver-mounts 0.14
280 TestNetworkPlugins/group/kubenet 2.93
289 TestNetworkPlugins/group/cilium 3.7
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-101845" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-101845
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-824402 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-824402

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-824402

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-824402

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-824402

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-824402

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-824402

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-824402

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-824402

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-824402

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-824402

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-824402

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-824402" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-824402" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-824402" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-824402" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-824402" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-824402" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-824402" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-824402" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-824402" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-824402" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-824402" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19409-3774/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 12 Aug 2024 11:30:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.83.89:8443
  name: stopped-upgrade-453361
contexts:
- context:
    cluster: stopped-upgrade-453361
    user: stopped-upgrade-453361
  name: stopped-upgrade-453361
current-context: stopped-upgrade-453361
kind: Config
preferences: {}
users:
- name: stopped-upgrade-453361
  user:
    client-certificate: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/stopped-upgrade-453361/client.crt
    client-key: /home/jenkins/minikube-integration/19409-3774/.minikube/profiles/stopped-upgrade-453361/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-824402

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-824402"

                                                
                                                
----------------------- debugLogs end: kubenet-824402 [took: 2.782294937s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-824402" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-824402
--- SKIP: TestNetworkPlugins/group/kubenet (2.93s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-824402 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-824402

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-824402

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-824402

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-824402

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-824402

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-824402

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-824402

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-824402

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-824402

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-824402

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-824402

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-824402" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-824402" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-824402" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-824402" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-824402" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-824402" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-824402" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-824402" does not exist

>>> host: /etc/cni:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

>>> host: ip a s:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

>>> host: ip r s:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

>>> host: iptables-save:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

>>> host: iptables table nat:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-824402

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-824402

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-824402" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-824402" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-824402

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-824402

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-824402" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-824402" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-824402" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-824402" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-824402" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

>>> host: kubelet daemon config:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

>>> k8s: kubelet logs:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-824402

>>> host: docker daemon status:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

>>> host: docker daemon config:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

>>> host: docker system info:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

>>> host: cri-docker daemon status:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

>>> host: cri-docker daemon config:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

>>> host: cri-dockerd version:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

>>> host: containerd daemon status:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

>>> host: containerd daemon config:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

>>> host: containerd config dump:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

>>> host: crio daemon status:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

>>> host: crio daemon config:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

>>> host: /etc/crio:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

>>> host: crio config:
* Profile "cilium-824402" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824402"

----------------------- debugLogs end: cilium-824402 [took: 3.519116673s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-824402" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-824402
--- SKIP: TestNetworkPlugins/group/cilium (3.70s)